Proceedings

2009 International Conference on Computer Engineering and Technology

ICCET 2009

Volume I


January 22 - 24, 2009 Singapore

Edited by Jianhong Zhou and Xiaoxiao Zhou

Sponsored by

International Association of Computer Science & Information Technology

Los Alamitos, California
Washington
Tokyo

Copyright © 2009 by The Institute of Electrical and Electronics Engineers, Inc. All rights reserved.
Copyright and Reprint Permissions: Abstracting is permitted with credit to the source. Libraries may photocopy beyond the limits of US copyright law, for private use of patrons, those articles in this volume that carry a code at the bottom of the first page, provided that the per-copy fee indicated in the code is paid through the Copyright Clearance Center, 222 Rosewood Drive, Danvers, MA 01923. Other copying, reprint, or republication requests should be addressed to: IEEE Copyrights Manager, IEEE Service Center, 445 Hoes Lane, P.O. Box 133, Piscataway, NJ 08855-1331.

The papers in this book comprise the proceedings of the meeting mentioned on the cover and title page. They reflect the authors' opinions and, in the interests of timely dissemination, are published as presented and without change. Their inclusion in this publication does not necessarily constitute endorsement by the editors, the IEEE Computer Society, or the Institute of Electrical and Electronics Engineers, Inc.

IEEE Computer Society Order Number P3521
BMS Part Number CFP0967F
ISBN 978-0-7695-3521-0
Library of Congress Number 2008909477

Additional copies may be ordered from:
IEEE Computer Society Customer Service Center
10662 Los Vaqueros Circle
P.O. Box 3014
Los Alamitos, CA 90720-1314
Tel: + 1 800 272 6657
Fax: + 1 714 821 4641
http://computer.org/cspress
csbooks@computer.org

IEEE Service Center
445 Hoes Lane
P.O. Box 1331
Piscataway, NJ 08855-1331
Tel: + 1 732 981 0060
Fax: + 1 732 981 9667
http://shop.ieee.org/store/
customer-service@ieee.org

IEEE Computer Society Asia/Pacific Office
Watanabe Bldg., 1-4-2 Minami-Aoyama
Minato-ku, Tokyo 107-0062 JAPAN
Tel: + 81 3 3408 3118
Fax: + 81 3 3408 3553
tokyo.ofc@computer.org

Individual paper REPRINTS may be ordered at: reprints@computer.org

Editorial production by Lisa O'Conner
Cover art production by Joe Daigle/Studio Productions
Printed in the United States of America by The Printing House

IEEE Computer Society

Conference Publishing Services (CPS)
http://www.computer.org/cps

2009 International Conference on Computer Engineering and Technology

ICCET 2009

Table of Contents
Volume - 1
Preface - Volume 1 ...........................................................................................................................................xiv
ICCET 2009 Committee Members - Volume 1..................................................................................xvi
ICCET 2009 Organizing Committees - Volume 1..........................................................................xvii

Session 1
Overlapping Non-dedicated Clusters Architecture .................................................................................................3
Martin Štava and Pavel Tvrdík

To Determine the Weight in a Weighted Sum Method for Domain-Specific Keyword Extraction ..................................................................................................................................................11
Wenshuo Liu and Wenxin Li

Flow-based Description of Conceptual and Design Levels ................................................................................16
Sabah Al-Fedaghi

A Method of Query over Encrypted Data in Database ........................................................................................23
Lianzhong Liu and Jingfen Gai

Traversing Model Design Based on Strong-Association Rule for Web Application Vulnerability Detection .......................................................................................................................28
Zhenyu Qi, Jing Xu, Dawei Gong, and He Tian

Attribute-Based Relative Ranking of Robot for Task Assignment ....................................................................32
B.B. Choudhury, B.B. Biswal, and R.N. Mahapatra

A Subjective Trust Model Based on Two-Dimensional Measurement .............................................................37
Chaowen Chang, Chen Liu, and Yuqiao Wang

A Genetic Algorithm Approach for Optimum Operator Assignment in CMS ................................................42
Ali Azadeh, Hamrah Kor, and Seyed-Morteza Hatefi


Dynamic Adaption in Composite Web Services Using Expiration Times .......................................................47
Xiaohao Yu, Xueshan Luo, Honghui Chen, and Dan Hu

An Emotional Intelligent E-learning System Based on Mobile Agent Technology .................................................................................................................................................................51
Zhiliang Wang, Xiangjie Qiao, and Yinggang Xie

Audio Watermarking for DRM Based on Chaotic Map ......................................................................................55
B. Lei and I.Y. Soon

Walking Modeling Based on Motion Functions ...................................................................................................60
Hao Zhang and Zhijing Liu

Preprocessing and Feature Preparation in Chinese Web Page Classification ..................................................64
Weitong Huang, Luxiong Xu, and Yanmin Liu

High Performance Grid Computing and Security through Load Balancing .....................................................68
V. Sugavanan and V. Prasanna Venkatesh

Research of the Synthesis Control of Force and Position in Electro-Hydraulic Servo System ..............................................................................................................................................................73
Yadong Meng, Changchun Li, Hao Yan, and Xiaodong Liu

Session 2
Features Selection Using Fuzzy ESVDF for Data Dimensionality Reduction ................................................81
Safaa Zaman and Fakhri Karray

PDC: Propagation Delay Control Strategy for Restricted Floating Sensor Networks .....................................................................................................................................................................88
Xiaodong Liu

Fast and High Quality Temporal Transcoding Architecture in the DCT Domain for Adaptive Video Content Delivery .....................................................................................................91
Vinay Chander, Aravind Reddy, Shriprakash Gaurav, Nishant Khanwalkar, Manish Kakhani, and Shashikala Tapaswi

Electricity Demand Forecasting Based on Feedforward Neural Network Training by a Novel Hybrid Evolutionary Algorithm ..........................................................................................98
Wenyu Zhang, Yuanyuan Wang, Jianzhou Wang, and Jinzhao Liang

Investigation on the Behaviour of New Type Airbag ........................................................................................103
Hu Lin, Liu Ping, and Huang Jing

Performance Evaluation of PNtMS: A Portable Network Traffic Monitoring System on Embedded Linux Platform ..................................................................................................................108
Mostafijur Rahman, Zahereel Ishwar Abdul Khalib, and R.B. Ahmad

PB-GPCT: A Platform-Based Configuration Tool .............................................................................................114
Huiqiang Yan, Runhua Tan, Kangyun Shi, and Fei Lu

A Feasibility Study on Hyperblock-based Aggressive Speculative Execution Model .........................................................................................................................................................................119
Ming Cong, Hong An, Yongqing Ren, Canming Zhao, and Jun Zhang


Parallel Method for Discovering Frequent Itemsets Using Weighted Tree Approach ...................................................................................................................................................................124
Preetham Kumar and Ananthanarayana V S

Optimized Design and Implementation of Three-Phase PLL Based on FPGA .............................................129
Yuan Huimei, Sun Hao, and Song Yu

Research on the Data Storage and Access Model in Distributed Environment .............................................134
Wuling Ren and Pan Zhou

An Effective Classification Model for Cancer Diagnosis Using Micro Array Gene Expression Data .............................................................................................................................................137
V. Saravanan and R. Mallika

Study and Experiment of Blast Furnace Measurement and Control System Based on Virtual Instrument ..................................................................................................................................142
Shufen Li and Zhihua Liu

A New Optimization Scheme for Resource Allocation in OFDMA Based WiMAX Systems .....................................................................................................................................................145
Arijit Ukil, Jaydip Sen, and Debasish Bera

An Integration of CoTraining and Affinity Propagation for PU Text Classification ............................................................................................................................................................150
Na Luo, Fuyu Yuan, and Wanli Zuo

Session 3
Ergonomic Evaluation of Small-screen Leading Displays on the Visual Performance of Chinese Users ...............................................................................................................................157
Yu-Hung Chien and Chien-Cheng Yen

Curvature-Based Feature Extraction Method for 3D Model Retrieval ...........................................................161
Yujie Liu, Xiaolan Yao, and Zongmin Li

A New Method for Vertical Handoff between WLANs and UMTS in Boundary Conditions ..........................................................................................................................................166
Majid Fouladian, Faramarz Hendessi, Alireza Shafieinejad, Morteza Rahimi, and Mahdi M. Bayat

Research on Secure Key Techniques of Trustworthy Distributed System .....................................................172
Ming He, Aiqun Hu, and Hangping Qiu

WebELS: A Multimedia E-learning Platform for Non-broadband Users .......................................................177
Zheng He, Jingxia Yue, and Haruki Ueno

Implementation and Improvement Based on Shear-Warp Volume Rendering Algorithm ..................................................................................................................................................................182
Li Guo and Xie Mei

Conferencing, Paging, Voice Mailing via Asterisk EPBX ................................................................................186
Ale Imran and Mohammed A. Qadeer

A New Mind Evolutionary Algorithm Based on Information Entropy ...........................................................191
Yuxia Qiu and Keming Xie


An Encapsulation Structure and Description Specification for Application Level Software Components ..................................................................................................................................195
Jin Guojie and Yin Baolin

Fault Detection and Diagnosis of Continuous Process Based on Multiblock Principal Component Analysis ..............................................................................................................................200
Libo Bie and Xiangdong Wang

Strong Thread Migration in Heterogeneous Environment ................................................................................205
Khandakar Entenam Unayes Ahmed, Md. Al-mamun Shohag, Tamim Shahriar, Md. Khalad Hasan, and Md. Mashud Rana

A DSP-based Active Power Filter for Three-phase Power Distribution Systems .........................................210
Ping Wei, Zhixiong Zhan, and Houquan Chen

Access Control Scheme for Workflow .................................................................................................................215
Lijun Gao, Lu Zhang, and Lei Xu

A Mathematical Model of Interference between RFID and Bluetooth in Fading Channel ......................................................................................................................................................................218
Junjie Chen, Jianqiu Zeng, and Yuchen Zhou

Optimization Strategy for SSVEP-Based BCI in Spelling Program Application ..........................................223
Indar Sugiarto, Brendan Allison, and Axel Gräser

Session 4
A Novel Method for the Web Page Segmentation and Identification .............................................................229
Jing Wang and Zhijing Liu

Disturbance Observer-Based Variable Structure Control on the Working Attitude Balance Mechanism of Underwater Robot ..........................................................................................232
Min Li and Heping Liu

Adaptive OFDM Vs Single Carrier Modulation with Frequency Domain Equalization ..............................................................................................................................................................238
Inderjeet Kaur, Kamal Thakur, M. Kulkarni, Daya Gupta, and Prabhjyot Arora

A Bivariate C1 Cubic Spline Space on Wang's Refinement .............................................................................243
Huan-Wen Liu and Wei-Ping Lu

Fast Shape Matching Using a Hybrid Model ......................................................................................................247
Gang Xu and Wenxian Yang

A Multi-objective Genetic Algorithm for Optimization of Cellular Manufacturing System ............................................................................................................................................252
H. Kor, H. Iranmanesh, H. Haleh, and S.M. Hatefi

A Formal Mapping between Program Slicing and Z Specifications ...............................................................257
Fangjun Wu

Modified Class-Incremental Generalized Discriminant Analysis ....................................................................262
Yunhui He

Controlling Free Riders in Peer to Peer Networks by Intelligent Mining .......................................................267
Ganesh Kumar. M, Arun Ram. K, and Ananya. A.R


Servo System Modeling and DSP Code Autogeneration Technology for Open-CNC ..........................................................................................................................................................272
Shukun Cao, Heng Zhang, Li Song, Changsheng Ai, and Xiangbo Ze

Extending Matching Model for Semantic Web Services ...................................................................................276
Alireza Zohali, Kamran Zamanifar, and Naser Nematbakhsh

Sound Absorption Measurement of Acoustical Material and Structure Using the Echo-Pulse Method ...........................................................................................................................................281
Liang Sun, Hong Hou, Liying Dong, and Fangrong Wan

Parallel Design of Cross Search Algorithm in Motion Estimation ..................................................................286
Fan Zhang

Influences of DSS Environments and Models on Current Business Decision and Knowledge Management ................................................................................................................................291
Md. Fazle Munim and Fatima Binte Zia

A Method for Transforming Workflow Processes to CSS ................................................................................295
Jing Xiao, Guo-qing Wu, and Shu Chen

Session 5
An Empirical Approach of Delta Hedging in GARCH Model .........................................................................303
Qian Chen and Chengzhe Bai

Multi-objective Parameter Optimization Technology for Business Process Based on Genetic Algorithm ..................................................................................................................................308
Bo Wang, Li Zhang, and Yawei Tian

Analysis and Design of an Access Control Model Based on Credibility ........................................................312
Chaowen Chang, Yuqiao Wang, and Chen Liu

Modeling of Rainfall Prediction over Myanmar Using Polynomial Regression ...........................................316
Wint Thida Zaw and Thinn Thu Naing

New Similarity Measure for Restricted Floating Sensor Networks .................................................................321
Yuan Feng, Xiaodong Liu, and Xiangqian Ding

3D Mesh Skeleton Extraction Based on Feature Points ....................................................................................326
Faming Gong and Cui Kang

Pairings Based Designated Verifier Signature Scheme for Three-Party Communication Environment ................................................................................................................................330
Han-Yu Lin and Tzong-Sun Wu

A Novel Shared Path Protection Scheme for Reliability Guaranteed Connection ................................................................................................................................................................334
Jijun Zhao, Weiwei Bian, Lirong Wang, and Sujian Wang

Generalized Program Slicing Applied to Z Specifications ................................................................................338
Fangjun Wu

PC Based Weight Scale System with Load Cell for Product Inspection ........................................................343
Anton Satria Prabuwono, Habibullah Akbar, and Wendi Usino


Short-Term Electricity Price Forecast Based on Improved Fractal Theory ....................................................347
Herui Cui and Li Yang

BBS Sentiment Classification Based on Word Polarity ....................................................................................352
Shen Jie, Fan Xin, Shen Wen, and Ding Quan-Xun

Applying eMM in a 3D Approach to e-Learning Quality Improvement ........................................................357
Kattiya Tawsopar and Kittima Mekhabunchakij

Research on Automobile Driving State Real-Time Monitoring System Based on ARM .....................................................................................................................................................................361
Hongjiang He and Yamin Zhang

Information Security Risk Assessment and Pointed Reporting: Scalable Approach ...................................................................................................................................................................365
D.S. Bhilare, A.K. Ramani, and Sanjay Tanwani

An Extended Algorithm to Enhance the Performance of the Gridbus Broker with Data Restoring Technique .............................................................................................................................371
Abu Awal Md. Shoeb, Altaf Hussain, Md. Abu Naser Bikas, and Md. Khalad Hasan

Session 6
Prediction of Ship Pitching Based on Support Vector Machines .....................................................................379
Li-hong Sun and Ji-hong Shen

The Methods of Improving the Manufacturing Resource Planning (MRP II) in ERP ........................................................................................................................................................................383
Wenchao Jiang and Jingti Han

A New Model for Classifying Inputs and Outputs and Evaluating the DMUs Efficiency in DEA Based on Cobb-Douglas Production Function ..................................................................390
S.M. Hatefi, F. Jolai, H. Kor, and H. Iranmanesh

The Analysis and Improvement of the Price Forecast Model Based on Fractal Theory ........................................................................................................................................................................395
Herui Cui and Li Yang

A Flash-Based Mobile Learning System for Learning English as Second Language ...................................................................................................................................................................400
Firouz B. Anaraki

Recognition of Trade Barrier Based on General RBF Neural Network ..........................................................405
Yu Zhao, Miaomiao Yang, and Chunjie Qi

An Object-Oriented Product Data Management .................................................................................................409
Fan Wang and Li Zhou

Study of 802.11 Network Performance and Wireless Multicasting .................................................................414
Biju Issac

A Novel Approach for Face Recognition Based on Supervised Locality Preserving Projection and Maximum Margin Criterion ....................................................................................419
Jun Kong, Shuyan Wang, Jianzhong Wang, Lintian Ma, Baowei Fu, and Yinghua Lu


Association Rules Mining Based on Simulated Annealing Immune Programming Algorithm .........................................................................................................................................424
Yongqiang Zhang and Shuyang Bu

Processing Power Estimation of Simple Wireless Sensor Network Nodes by Power Macro-modeling .....................................................................................................................................428
M. Rafiee, M.B. Ghaznavi-Goushchi, and B. Seyfe

A Fault-Tolerant Strategy for Multicasting in MPLS Networks ......................................................................432
Weili Huang and Hongyan Guo

A Novel Content-based Information Hiding Scheme ........................................................................................436
Jun Kong, Hongru Jia, Xiaolu Li, and Zhi Qi

Ambi Graph: Modeling Ambient Intelligent System .........................................................................................441
K. Chandrasekaran, I.R. Ramya, and R. Syama

Session 7
Research on Grid-based Short-term Traffic Flow Forecast Technology ........................................................449
Wang Xinying, Juan Zhicai, Liu Xin, and Mei Fang

A Nios II Based English Speech Training System for Hearing-Impaired Children .....................................................................................................................................................................452
Ningfeng Huang, Haining Wu, and Yinchen Song

A New DEA Model for Classification Intermediate Measures and Evaluating Supply Chain and its Members ..............................................................................................................................457
S.M. Hatefi, F. Jolai, H. Iranmanesh, and H. Kor

A Novel Binary Code Based Projector-Camera System Registration Method ..............................................462
Jiang Duan and Jack Tumblin

Non-temporal Mutliple Silhouettes in Hidden Markov Model for View Independent Posture Recognition ..........................................................................................................................466
Yunli Lee and Keechul Jung

Classification of Quaternary [21s+1,3] Optimal Self-orthogonal Codes ........................................................471
Xuejun Zhao, Ruihu Li, and Yingjie Lei

Performance Analysis of Large Receive Offload in a Xen Virtualized System ...........................................475
Hitoshi Oi and Fumio Nakajima

An Improved Genetic Algorithm Based on Fixed Point Theory for Function Optimization .............................................................................................................................................................481
Jingjun Zhang, Yuzhen Dong, Ruizhen Gao, and Yanmin Shang

Example-Based Regularization Deployed to Face Hallucination ....................................................................485
Hong Zhao, Yao Lu, Zhengang Zhai, and Gang Yang

An Ensemble Approach for Semantic Assessment of Summary Writings .....................................................490
Yulan He, Siu Cheung Hui, and Tho Thanh Quan

A Fast Reassembly Methodology for Polygon Fragment ..................................................................................495
Gang Xu and Yi Xian


A Data Mining Approach to Modeling the Behaviors of Telecom Clients ....................................................500
Xiaodong Liu

Simulating Fuzzy Manufacturing System: Case Study ......................................................................................505
A. Azadeh, S.M. Hatefi, and H. Kor

Research of INS Simulation Technique Based on UnderWater Vehicle Motion Model .........................................................................................................................................................................510
Jian-hua Cheng, Yu-shen Li, and Jun-yu Shi

Modeling and Simulation of Wireless Sensor Network (WSN) with SpecC and SystemC .............................................................................................................................................................515
M. Rafiee, M.B. Ghaznavi-Ghoushchi, S. Kheiri, and B. Seyfe

Session 8
Sub-micron Parameter Scaling for Analog Design Using Neural Networks ..................................................523
A.A. Bagheri-Soulla and M.B. Ghaznavi-Ghoushchi

An Improved Genetic Algorithm Based on Fixed Point Theory for Function Optimization .............................................................................................................................................................527
Jingjun Zhang, Yuzhen Dong, Ruizhen Gao, and Yanmin Shang

P2DHMM: A Novel Web Object Information Extraction Model ....................................................................531
Jing Wang and Zhijing Liu

An Efficient Multi-Patterns Parameterized String Matching Algorithm with Super Alphabet ................................................................................................................................................536
Rajesh Prasad and Suneeta Agarwal

Research on Modeling Method of Virtual Enterprise in Uncertain Environments ............................................................................................................................................................541
Jihai Zhang

Design of Intrusion Detection System Based on a New Pattern Matching Algorithm ..................................................................................................................................................................545
Hu Zhang

To Construct Implicit Link Structure by Using Frequent Sequence Miner (FS-Miner) ................................................................................................................................................................549
May Thu Aung and Khin Nwe Ni Tun

Recognition of Eye States in Real Time Video ...................................................................................................554
Lei Yunqi, Yuan Meiling, Song Xiaobing, Liu Xiuxia, and Ouyang Jiangfan

Performance Analysis of Postprocessing Algorithm and Implementation on ARM7TDMI .......................................................................................................................................................560
Manoj Gupta, B.K. Kaushik, and Laxmi Chand

NURBS Interpolation Method with Feedrate Correction in 3-axis CNC System .........................................565
Liangji Chen and Huiying Li

Implementation Technique of Unrestricted LL Action Grammar ....................................................................569
Jing Zhang and Ying Jin


Improving BER Using RD Code for Spectral Amplitude Coding Optical CDMA Network ..........................................................................................573
Hilal Adnan Fadhil, S.A. Aljunid, and R. Badlishah Ahmad

USS-TDMA: Self-stabilizing TDMA Algorithm for Underwater Wireless Sensor Network .....................................................................................................578
Zhongwen Guo, Zhengbao Li, Xiaoping Chen, and Feng Hong

Mathematical Document Retrieval for Problem Solving ......................................................583
Sidath Harshanath Samarasinghe and Siu Cheung Hui

Lossless Data Hiding Scheme Based on Adjacent Pixel Difference ....................................588
Zhuo Li, Xuezeng Pan, and Xianting Zeng

Author Index - Volume 1 ........................................................................................................593

Preface

Dear Distinguished Delegates and Guests,

The Organizing Committee warmly welcomes our distinguished delegates and guests to the International Conference on Computer Engineering and Technology 2009 (ICCET 2009), held on January 22 - 24, 2009 in Singapore. ICCET 2009, ICACC 2009 and ICECS 2008 are sponsored by the International Association of Computer Science and Information Technology (IACSIT) and the Singapore Institute of Electronics (SIE), and the accepted papers of ICECS 2008 have been included in the ICCET proceedings as a special session.

ICCET 2009, ICACC 2009 and ICECS 2008 are organized to gather members of our international community of computer and control scientists so that researchers from around the world can present their leading-edge work, expanding our community's knowledge and insight into the significant challenges currently being addressed in that research. If you have attended a conference sponsored by IACSIT before, you are aware that the conferences together report the results of research efforts in a broad range of computer science. These conferences are aimed at discussing with all of you the wide range of problems encountered in present and future high technologies. The main goal of these events is to provide international scientific forums for exchange of new ideas in a number of fields that interact in-depth through discussions with their peers from around the world. Both inward research, into core areas of computer control, and outward research, multi-disciplinary, inter-disciplinary, and applications, will be covered during these events.

The conference aims to bring together researchers, scientists, engineers, and practitioners to exchange and share their experiences, new ideas, and research results about all aspects of the main conference themes and tracks, and discuss the practical challenges encountered and the solutions adopted. The main conference themes and tracks are Computer Engineering and Technology.

This proceeding records the fully refereed papers presented at the conference. The conference has solicited and gathered technical research submissions related to all aspects of major conference themes and tracks. All the submitted papers in the proceeding have been peer reviewed by the reviewers drawn from the scientific committee, external reviewers and editorial board depending on the subject matter of the paper. Reviewing and initial selection were undertaken electronically. After the rigorous peer-review process, the submitted papers were selected on the basis of originality, significance, and clarity for the purpose of the conference. The selected papers and additional late-breaking contributions to be presented as lectures will make an exciting technical program.

The conference program is extremely rich, featuring high-impact presentations. The conference Program Committee is itself quite diverse and truly international, with membership from the Americas, Europe, Asia, Africa and Oceania.

The high quality of the program – guaranteed by the presence of an unparalleled number of internationally recognized top experts – can be assessed when reading the contents of the program. The program has been structured to favor interactions among attendees coming from many diverse horizons, scientifically, geographically, from academia and from industry. Included in this will to favor interactions are social events at prestigious sites. The conference will therefore be a unique event, where attendees will be able to appreciate the latest results in their field of expertise, and to acquire additional knowledge in other fields.

We would like to thank the program chairs, organization staff, and the members of the program committees for their work. Thanks also go to Ms. Lisa O'Conner, CPS Production Editor, IEEE Computer Society Conference Publishing Services (CPS), for her wonderful editorial service to this proceeding. We are grateful to all those who have contributed to the success of ICCET 2009, ICACC 2009 and ICECS 2008 in Singapore.

We hope that all participants and other interested readers benefit scientifically from the proceedings and also find it stimulating in the process. Finally, we would like to wish you success in your technical presentations and social networking. We hope you have a unique, rewarding and enjoyable week at ICCET 2009.

With our warmest regards,

Yi Xie
January 22 - 24, 2009
Singapore

ICCET 2009 Committee Members

V. Saravanan, Karunya University, India
Tahseen A. Jilani, University of Karachi, Pakistan
Qian Chen, Columbia University, USA
Laurence T. Yang, St. Francis Xavier University, Canada
Gopalakrishnan Kasthurirangan, Iowa State University, USA
Anupam Shukla, Indian Institute of Information Technology, India
Amrita Saha, West Bengal University of Technology, India
Hrudaya Ku Tripathy, Institute of Advanced Computer and Research, India
V. Lakshmi Narasimhan, University of Newcastle, Australia
Poramate Manoonpong, University of Gottingen, Germany
Marc Sevaux, University of South-Brittany, France
Amir Masoud Rahmani, Islamic Azad University, Iran
Prabhat Kumar Mahanti, University of New Brunswick, Canada
Lau Bee Theng, Swinburne University of Technology Sarawak, Malaysia
Glenda A. Gunter, University of Central Florida, USA
Wen-Tsao Pan, Jinwen University of Science and Technology, China (Taiwan)
Wei Guo, Tianjin University, China
Jinlong Wang, Qingdao Technological University, China
Yi Xie, Zhejiang Wanli University, China
Zhihong Xiao, Cagayan State University, Philippines

ICCET 2009 Organizing Committees

Honor Chairs
R.C. Eberhart, Purdue University, USA
J.D. Pinter, Dalhousie University, Canada

Conference Chairs
S.M. Aqil Burney, University of Karachi, Pakistan
A. Kandel, University of South Florida, USA

Program Committee Chairs
S.R. Bhadra Chaudhuri, Bengal Engineering and Science University, India
Nashat Mansour, Lebanese American University, Lebanon

Publicity Chairs
Basim Alhadidi, Al Balqa' Applied University, Jordan
M. Aqeel Iqbal, Foundation University, Pakistan
Nazir Ahmad Zafar, University of Central Punjab, Pakistan
Brian B. Shinn, Chungbuk National University, Korea
Kamaruzaman Jusoff, Yale University, USA
Hoang Huu Hanh, Hue University, Vietnam

Conference Steering Committee
Yi Xie, Zhejiang Wanli University, China
Jianhong Zhou, Sichuan University, China
Xiaoxiao Zhou, Nanyang Technological University, Singapore


International Conference on Computer Engineering and Technology

Session 1


Second important aspect for attracting users to participate in non-dedicated clusters is a trust relationship. An extension of dedicated clusters are non-dedicated clusters [5]–[8]. 978-0-7695-3521-0/09 $25. E XISTING ARCHITECTURES The most common form of clustering are dedicated clusters. The most interesting method used. I NTRODUCTION Clusters build from commodity computers are a popular computational platform. the volunteers cannot use the earned cluster computing power directly from their machine. Some projects offer only a good feeling from the participation.2009. the participation may be enforced by a system administrator.2009 International Conference on Computer Engineering and Technology Overlapping Non-Dedicated Clusters Architecture ˇt Martin Sˇava and Pavel Tvrd´k ı Department of Computer Science and Engineering Czech Technical University in Prague Prague. II. all machines are fully dedicated to the cluster. we argue how an ability to run multiple independent clusters without requiring trust among participating users can be capitalized to increase user experience and thus attract more users to participate in the cluster.cvut. but he can not. In case there is just a single instance of non-dedicated cluster running. if there is support for coexistence of multiple clusters.tvrdik}@fel. but they do not fully belong to the cluster even at the time they are joined. This is a reasonable limitation.00 © 2009 IEEE DOI 10. The non-dedicated machines can join or leave cluster at any time. In this case. seems to be a reciprocal offer of cluster computing power to volunteering users. Then we present a relaxation of existing architecture concepts and argue abouts its advantage over existing systems. Existing non-dedicated clustering solutions either expect trust among participating users. In these clusters. In this paper. since he can perform the cluster operations only from the cluster itself. needs a well suited cluster architecture. Any such a trust requirement complicates forming and expansion of the cluster. Kerrighed [9] or OpenSSI [10] are well known examples of such clusters. Methods of attracting users to participate vary. they all share user account-space. A feasibility of the architecture is demonstrated for one particular case on our research clustering solution called Clondike [8]. however. Users offering their computers as non-dedicated computing resources should not be required to fully trust the cluster administrators and neither should the administrators be required to trust the users. I. and file system. In this paper. This separation . [5]. process space. [2]. the users are granted a computing power of the cluster proportional to the power they have given to the cluster. III. These projects rely on users offering their idle computers to participate in a cluster. In other environments. since the motivation for developing SSI clusters is. not from his machine. A concept of traditional dedicated clusters was extended by several existing projects [5]–[8] to support utilization of non-dedicated computers. They are used as a cost-effective alternative to expensive supercomputers [1]. like university laboratories. a user who is about to perform a parallel compilation on his machine would like to use his granted cluster time. A generic extension of non-dedicated clusters that satisfies these requirements is defined and a feasibility of one particular extension is demonstrated on our implementation. 
or they do not take into account a possibility of running multiple independent clusters on a same set of computers. the machines of volunteering users can form their own clusters and the users can use the granted cluster time transparently from their machines. For example. forming a core of the cluster. The reciprocal computing power trading. Czech Republic {stavam2. in improving user experience. These clusters consist of one or more dedicated machines. and any number of non-dedicated machines. On the other hand.cz Abstract—Non-dedicated computer clusters promise more efficient resource utilization than conventional dedicated clusters. however. usually standard workstations. we first briefly review the most important existing architectures.66 3 Such a scenario is clearly not well suited for the volunteering users. similarly as in our architecture. instead they first need to login to the cluster and perform their resource demanding computations there. S COPE We primarily focus on clusters attempting to provide a single system image (SSI) illusion at the operating system level.1109/ICCET. as a scalable high available solution for commercial applications [3] as well as load leveling clusters for ordinary day-to-day use [4].

The upper left node is forming a core of non-dedicated cluster and is using 2 other nodes as non-dedicated nodes. all SSI views participated by the node. These two attributes imply a need for strong security model of the architecture implementation. . In contrast. In addition. The concept of overlapping clusters is similar to virtual organizations mechanisms (VO) [13]. 2. Another significant architecture are multi-clusters. should be separated from each other. V. The biggest problem with Mosix is that it requires a full trust among all participating nodes (and it assumes either shared user account space or a consensus on mapping of user ids). but rather a different SSI depending on a machine they are logged in. Figures 1. This is not really an architecture of a single cluster.is often achieved by running the cluster code inside virtual machines running on the non-dedicated machines. thus it is a super-set of both. When some node is used as a non-dedicated block. The existing grid solutions are close to ONDC especially when ONDC is used for a large scale deployment in a distributed area. its own view of SSI still exists. As implied by the definition. but the key factor is that each node forms a core of its own SSI cluster. C OMPARISON WITH OTHER ARCHITECTURES illustrate schematically the difference among the three types of clusters. Cluster machines do not share common file-system. which may be a limiting factor in real deployments. distinguishing explicitly between clusters and single machines where required. ONDC can be useful as well for a local resource sharing. current projects require trust among the participating clusters. we will refer to it in the paper as an overlapping non-dedicated cluster (ONDC). and process space. we define our envisioned architecture. A node can be used as a non-dedicated node by more than one node. file system. We will refer to these blocks uniformly as nodes in both cases. P ROPOSED ARCHITECTURE Figure 1. using the other nodes as non-dedicated blocks. like XtreemOS [15]. IV. and 3 4 Figure 3. but an ONDC cluster may be a good candidate for that. An interesting alternative to standard architectures is represented by the openMosix [4]/Mosix2 [11] projects. Every node can possibly form its own administrative domain. Modern grid solutions. there should be no requirement for trust among participating nodes. ONDC. Users of a cluster are not provided standard SSI features. Similarly as Mosix. In that case. but they are still assumed to share the user account-space. Multi-clusters are gaining popularity during last years as a next logical step towards an idealized grid solution. Mosix with its architecture is very close to the ONDC with all nodes as single machines. Multi-clusters are as well a special case of ONDC. including the local view. All clusters are isolated from each other. All nodes belong to a single cluster. There can be mixed environments. where some blocks are single machines and some are clusters. The nodes can be connected in an arbitrary way. but the term refers to clusters interconnected together. The ONDC is an extension of the non-dedicated clusters. but its architecture seems to be driven more by technical aspects than by an intentional design. The main difference is that the grid solutions are primarily targeted only on large scale deployments. Moreover. Dedicated cluster. are often based on VOs. 
V. COMPARISON WITH OTHER ARCHITECTURES

Non-dedicated clusters are an extension of dedicated clusters. The ONDC is an extension of the non-dedicated clusters, thus it is a super-set of both. Multi-clusters are as well a special case of ONDC. Mosix with its architecture is very close to the ONDC with all nodes as single machines, but its architecture seems to be driven more by technical aspects than by an intentional design.

The concept of overlapping clusters is similar to virtual organizations mechanisms (VO) [13], [14]. The main difference between the VO and ONDC concepts is that the virtual organizations are designed for a mutual cooperation agreement and some degree of trust, while our solution does not require any relation among cluster users. Modern grid solutions, like XtreemOS [15], are often based on VOs. The existing grid solutions are close to ONDC especially when ONDC is used for a large scale deployment in a distributed area. The main difference is that the grid solutions are primarily targeted only on large scale deployments, while ONDC can be useful as well for a local resource sharing. A user having a few machines at home would likely not use any of the grid solutions to interconnect them, but an ONDC cluster may be a good candidate for that. Moreover, the current projects require trust among the participating clusters, which may be a limiting factor in real deployments.

VI. USE CASES

In order to better illustrate the architecture, we will describe a few possible use cases of ONDC in this section.

In the largest scale, the ONDC architecture can be used as a world-wide cluster, similar to the SETI [16] or BOINC [1] projects. With ONDC, not only single volunteer computers can be connected, but whole clusters and multi-clusters can be connected and offer their spare computing power.

A smaller scale example can be a university computer laboratory. Currently, there is either a single cluster shared by all users (enforcing the same environment for them), or there is some non-transparent job scheduling system, where the users can send their jobs to be processed. In ONDC, each computer can form its own cluster, using resources of any other computer which is not being used at the moment. In addition, if any user brings his own laptop, he can simply plug it to the network and start using the other computers as non-dedicated nodes of a cluster based on his laptop (and, of course, the computers in the laboratory can use his laptop, if it is idle). He does not need any administrator privileges for the computers in the network.

Another example are environments with occasional high CPU demands, for example in IT companies with frequent application compilations or in graphical studios where rendering represents the most CPU intensive operation. Clearly, if the clustering is to be used in such an environment within a single administrative domain, a standard non-dedicated cluster can be sufficient. But such environments are, indeed, a perfect match for ONDC architectures as well, since users can be rewarded by their offered time with the proportional cluster computing power.

Another use case of ONDC are the multi-clusters. As long as the participating clusters are already taking use of a (compatible) ONDC infrastructure, it would be a simple configuration task to let them participate in the wider cluster built on ONDC.

All of the mentioned examples can coexist and cooperate as a single instance of ONDC. Any user needs just a common ONDC code and does not need to install or configure anything specific for the other clusters.

VII. ADVANTAGES OF THE ONDC OVER OTHER ARCHITECTURES

The key advantage of the ONDC architecture is a unique combination of a system without trust requirements and an ability to form a separate cluster from each participating node. This combination of features has a high potential for attracting users to participate in the cluster. There are 2 main factors. First, by relaxing the trust requirement, users can easily join the cluster. If there are trust requirements, users generally have to undergo some registration and possibly a (mutual) trust review process, which itself may be sufficient to deter users from joining. Second, by allowing a coexistence of multiple independent clusters, we enable a natural user rewarding mechanism, where the users can get back the resources offered to the cluster by using the other nodes. This idea was already leveraged by successful projects like BOINC [1]. ONDC based solutions could possibly attract a larger user base.

There is a clear demand for such computing platforms, underlined by the existence of commercial solutions like Mosix2 or LSF. The existing solutions could benefit especially from the security research of the ONDC architecture, as this is directly applicable to them.

The ability of each node to form its own cluster (and hence export its file system) is another factor contributing to easy expansion of a cluster. Another advantage of allowing each node to form its own cluster is a natural option of coexistence of different installations of clusters (even with conflicting versions of software installed on them) on the same physical hardware.

An important architectural advantage of the ONDC architecture is a better fault-tolerance with respect to standard non-dedicated clusters. Fault-tolerance in standard non-dedicated clusters relies on the fault-tolerance mechanisms of its dedicated core. When the core fails, the whole cluster fails. In ONDC, when some core fails, the cluster formed by this core stops to work, but all other clusters are still functional (they just possibly loose some processes running on the crashed node). Clearly, this does not increase the fault-tolerance of any single cluster in the ONDC. But the non-dedicated clusters are generally based on the idea of utilizing idle machines, and the ONDC allows continuous utilization of those idle machines even in presence of some cluster failures.

especially to measure impact of such a sharing. it can act as a core of its own cluster. The second factor contributing to non-dedicated clusters generally is the observation that some programs are not easily parallelizable (either due to the nature of algorithms or due to a prohibiting complexity of such a parallelization). Machines of these users are good candidates to participate as nondedicated nodes in any non-dedicated clustering solution. there is a userspace part of the system. Clondike is still in an experimental research phase. Gamma is then acting as a detached node for 2 independent clusters (and as a core node of its own cluster). power consumption. In order to allow overlapping clusters. that is kept as minimal as possible so that upgrades to new kernels are not unduly complicated. A. interacting with it via a special control file system (exporting cluster specific data about processes). A key feature of Clondike is a support for both preemptive and non-preemptive process migration based on process checkpointing. [19] studied in a standard non-dedicated Clondike environment [20] seems to be a promising candidate for our goals. . With the migration support. C. disk I/O. The system needs to allow coexistence of a core node and a detached node on a single physical machine. since the usage of another core is not for free (cache conflicts. Therefore Alfa will use Beta and Gamma as detached nodes. This is a typical setup of any non-dedicated cluster. there can be often some spare resources available in the network (of course. we have started development of our own implementation of the architecture. monitoring or information distribution. It makes use of the kernel part of the system. we would like them to use all other nodes as detached nodes. changes required to meet the ONDC and how we address 2 key topics . the biggest problem here can be support for distributed shared memory and I/O bounded tasks. while Beta will use Alfa and Gamma as detached nodes. Assuming that the limited high cpu demand periods do not always overlap for all machines. bus contention. called core node and a number of non-dedicated nodes called detached nodes. Economy inspired market based schedulers [18].) and the cpu is not the only resource consumed by a running application (memory usage. The userspace part performs all tasks that do not need to be directly in kernel. Technical background The implementation consists of 3 parts. Clearly. like scheduling. it is possible to utilize detached nodes that would sit idle otherwise. These modules implement most of the lowest level functionality required for process migration and actually the process migration support itself. D. The existing projects closest to the ONDC architecture are multi-clusters and Mosix. an extension to standard Clondike system was required. etc. 6 B. Scheduling As outlined in previous sections. Finally. Beta. The patch consist mostly of a simple hooks for second part of the system that are a kernel modules. Second related extension is an ability to act as a detached node of multiple independent clusters. but this is more technical question and is out of the scope of this paper). more research is needed in this area. etc. This way. This is technically a most complicated part and a description of this implementation is out of the scope of the paper. Clondike The original idea of Clondike [8]. 
D. Scheduling

As outlined in the previous sections, the main advantage of overlapping clusters support is a possibility of scheduling based on reciprocal computing power exchange. Economy inspired market based schedulers [18], [19], studied in a standard non-dedicated Clondike environment [20], seem to be a promising candidate for our goals. The Mosix multi cluster solution has some support for overlapping clusters, and the market based schedulers are one of the scheduling options used [21].

Market based schedulers seem to be a better candidate for ONDC than for standard non-dedicated clusters, because ONDC makes use of the cluster overlapping nature and allows reciprocal computing power sharing. The user offers his machine to the cluster and gets some credit for that. He can then use this credit for execution of his tasks. In the non-overlapping cluster case, the situation is a bit cumbersome: the user must login to the cluster core and he can use his credit only there.

To illustrate the difference between a non-dedicated cluster and ONDC, we will use an example of a 2 nodes cluster, with nodes Alfa and Beta. Let Alfa be a core node of the non-dedicated cluster and Beta a detached node. When somebody runs a calculation on Alfa that will use resources on Beta, the owner of Beta will get a credit to run a comparatively expensive calculation on the cluster formed by Alfa. To use his credit, he needs to login to Alfa and execute some process there (since Beta is only a detached node), but not directly on his machine (since his machine acts as a non-dedicated node only). In contrast, in the ONDC case both machines can be a core node. So when the owner of Beta has credit to execute something on Alfa, he can use his credit transparently from his machine, since that machine forms a cluster on its own: he can execute a local process, and that could be transparently migrated to Alfa and use the credit there.

Despite the apparent suitability, the market based scheduling strategies were not yet tested in the ONDC version of the Clondike system, and it is a future research topic to do so.
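A minimal sketch of such reciprocal accounting follows. It is our illustration of the idea only; the market based schedulers of [18], [19] define prices and fairness far more carefully, and the names and units here are assumptions:

```python
# Reciprocal computing power accounting between node owners (sketch).
from collections import defaultdict

class CreditLedger:
    def __init__(self):
        # credit[a][b]: CPU seconds owner `a` may still consume on `b`'s node
        self.credit = defaultdict(lambda: defaultdict(float))

    def record_donation(self, donor, consumer, cpu_seconds):
        # `donor` executed `consumer`'s cluster job, earning credit back.
        self.credit[donor][consumer] += cpu_seconds

    def try_spend(self, owner, host, cpu_seconds):
        # `owner`'s process asks to emigrate to `host`'s node and run there.
        if self.credit[owner][host] >= cpu_seconds:
            self.credit[owner][host] -= cpu_seconds
            return True
        return False

ledger = CreditLedger()
ledger.record_donation("Beta", "Alfa", 3600)   # Beta ran Alfa's jobs for 1h
assert ledger.try_spend("Beta", "Alfa", 1800)  # Beta spends it from Beta
```

In ONDC, the last call corresponds to a process started on Beta and transparently migrated to Alfa; in a plain non-dedicated cluster, Beta's owner would first have to log in to Alfa.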

he needs to login to Alfa and execute some process there (since Beta is only a detached node). even if some scheduler requests such a migration (so. the line security must be ensured. do not emigrate task. he is refused as long as the local tasks are running). since that machine forms a cluster on its own. To honor the local user priority over cluster users. Despite the apparent suitability. but not directly on his machine (since his machine acts as a non-dedicated node only. [24]. First. there should be no trust requirement between node owners. but this is not always the case.) E. In standard non-dedicated clusters with market based schedulers. Security and trust The security functionality and trust management of original Clondike system is directly applicable in ONDC environment. so only non preemptive migration is used. When somebody runs a calculation on Alfa that will use resources on Beta. the node with some active local tasks does not accept any cluster jobs. Each cluster in the ONDC version of Clondike has its own scheduler running on the cluster core node. The user offers his machine to the cluster and gets some credit for that. the situation is a bit cumbersome. This strategy is very simple and has many problems in a real life. He must login to the cluster core and he can use his credit only there. an owner of machine forming cluster core node can try to send malicious code to remote machine to get access to that node. Moreover. which is preferred for some tasks). In contrast. if machine Alfa is running a local user’s job and a scheduler on machine Beta requests a job migration on Alfa. In this section we will briefly review the mechanisms used. To illustrate the difference between non-dedicated cluster and ONDC we will use an example of 2 nodes cluster. the owner of Beta will get a credit to run a comparatively expensive calculation on cluster formed by Alfa. the market based scheduling strategies were not yet tested in ONDC version of Clondike system and it is a future research topic to do so. Obviously. the task is kept running locally. 2) Else find the least loaded remote node. details can be found in [23]. it is usable only in a smallest scale and only for scheduling of sufficiently independent processes (i. The first attack is technically easy to prevent. In some special cases the results of remote execution can be quickly verified on the core node. none the less it served well for system testing as will be demonstrated in the Performance section. He can then use this credit for execution of his tasks. Second. By the ONDC definition. Similarly. that is bellow its accepting threshold and try to emigrate there 3) If no remote node is found. This scheduler is tracking load and a count of cluster tasks running on associated detached nodes and it is trying to level the load of these nodes (including the core node itself. Each user can specify what programs can run on what machine. in ONDC case both machines can be a core node. they are in fact isolated from the cluster environment as much as possible). It make use of the cluster overlapping nature and allows reciprocal computing power sharing. he can alter those processes memory and code. it is much easier task to implement a new scheduler than in other similar systems. it does not accept any new migration requests. if the machine has too many remote jobs running on itself. Such processes may need some co-scheduling [22] techniques to perform well. In non-overlapping cluster case.e. 
This strategy is very simple and has many problems in real life. Unlike the market strategies, it does not have any fairness guarantees, and it is usable only at the smallest scale and only for scheduling of sufficiently independent processes (i.e., not for collections of closely cooperating, highly dependent processes like, for example, an MPI application; such processes may need some co-scheduling [22] techniques to perform well). Nonetheless, it served well for system testing, as will be demonstrated in the Performance section.

E. Security and trust

The security functionality and trust management of the original Clondike system is directly applicable in the ONDC environment, so this functionality did not require any modification to match the ONDC requirements. In this section we briefly review the mechanisms used; details can be found in [23].

By the ONDC definition, there should be no trust requirement between node owners. This is a natural requirement, but it implies two main possible classes of attacks. First, the owner of a machine forming a cluster core node can try to send malicious code to a remote machine to get access to that node. Second, the owner of a machine acting as a detached node can read or modify the cluster processes running on his machine. The first attack is technically easy to prevent: the processes of cluster users run with the least possible privileges on the detached nodes, and thus the detached node is protected against these processes. There is no reliable way to prevent the second type of attack. The owner of a detached node has superuser privileges, so he can do basically everything: he can read anything from the memory of cluster processes running on his machine, and he can alter the memory and code of those processes.

Our approach to this issue is based on deferring the security-critical decisions to the cluster users. Each user can specify what programs can run on what machines. In some special cases, the results of remote execution can be quickly verified on the core node, so users can employ untrusted nodes for executing processes whose results can be easily verified, or for processes that perform operations with non-sensitive data. Nodes with no identity known to the cluster can participate as well; we refer to them as anonymous nodes.

Since non-dedicated clusters span administrative domain boundaries and are potentially used in an untrusted network environment, the line security must be ensured. Clondike currently relies on establishing IPSEC channels between nodes to provide transparent link-level security.
To illustrate the decisions the user can make, we will use an example with 3 nodes: Alfa, Beta, and Gamma. The user on Alfa can trust the owner of Beta, but trusts less (or perhaps does not even know about) the machine Gamma. The user also has a process that performs some resource-demanding but easily verifiable operations (NP-complete calculations are one possible example). He can then specify that this process can be executed on any node. In addition, the user can specify the files from his file system that can be accessed from other nodes; for example, he may specify that any process can be migrated to Beta and that Beta can access any file on his file system. All nodes for which no specific rules are specified are considered to be of the same trust level.

The user-defined restrictions have to be obeyed by the scheduler. The scheduler, however, cannot always know which files are going to be accessed by a process being migrated to a detached node. So the processes on the detached nodes must be monitored for file system access requests, and when a violating access is detected, an action must be taken. The action can be either a rollback to a previous process checkpoint, taken before the migration to the detached node, or a process termination. Migration back to the core node is not an option, since the process may already have been altered by the owner of the detached node.

The last problem in the enforcement of user-defined restrictions is a direct result of preemptive migration support. Thanks to the migration capabilities, a process can visit many nodes during its lifetime. This means that the file system access violation checks must not consider only the node from which they are being performed, but all nodes that were visited during the process execution, because an owner of any of the visited detached nodes can alter the process being executed. The problem can be illustrated again on our 3-node cluster example. The process may need to write its result somewhere, so the user may need to give write access to some restricted part of the file system to any node, so that the result can be written. When a non-sensitive process is migrated to the untrusted node Gamma, the owner of Gamma can modify the process; for example, he can change it to delete all accessible files. If the process migrates back to Alfa before executing the deletion code, and the process visit history is not consulted, the deletion would be allowed to execute and all accessible files would be deleted.
A solution to this (and related) problems is discussed in [24], where a stigmata mechanism is proposed. Each process is marked by a stigma of all the nodes it has visited, and any sensitive operation is checked against the user-defined rules and all the stigmata. If any of the visited nodes does not have the required privilege, the process is terminated. In our example, a process that was altered on Gamma carries Gamma's stigma, so if it attempts the deletion, whether on Gamma or after migrating back to Alfa, it will eventually be terminated due to an access violation.
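A minimal sketch of the stigmata check follows, under our own simplified rule representation (a set of allowed paths per node); the actual data structures of [24] differ.

```python
# Simplified stigmata check: a sensitive file operation is permitted only
# if every node the process has visited (its stigmata) holds the required
# privilege under the user-defined rules. Rule layout is hypothetical.

def file_access_allowed(stigmata, path, rules):
    """stigmata -- set of node names the process has visited
    rules    -- dict mapping node name -> set of paths it may access"""
    return all(path in rules.get(node, set()) for node in stigmata)

# The example above: Beta is trusted with the user's files, Gamma is not.
rules = {"Alfa": {"/home/user/result"}, "Beta": {"/home/user/result"}}
print(file_access_allowed({"Beta"}, "/home/user/result", rules))           # True
print(file_access_allowed({"Beta", "Gamma"}, "/home/user/result", rules))  # False
# A False result triggers the rollback-or-terminate action described above.
```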

X. PERFORMANCE

There is a vast number of cases that could be measured. We have decided to demonstrate one common possible use case: parallel compilation. This is a good representative of a harder class of problems, as there is a relatively big overhead due to a high communication-to-computation ratio; moreover, the applications used were not designed to run in a cluster environment. In our demonstration, we use standard unmodified GNU tools [25] like make, gcc, etc. The application being compiled is a Linux kernel, which is a sufficiently large application to benefit from cluster parallelization.

Tests were performed on a realistic platform, using 4 heterogeneous computers. All test machines have the x86 64-bit architecture and are interconnected with a standard 100 Mbps Ethernet. Table 1 lists the key characteristics of the machines used for testing. It is generally hard to compare machine performance, so rather than meaningless frequency numbers, the table captures the time it takes to build the kernel on each of the machines. Another important value is the sequential portion of the build time; this includes mainly the final linking.

Table 1. PARAMETERS OF THE TEST MACHINES. MEMORY IS IN GIGABYTES, BUILD TIME IS IN THE FORMAT MINUTES:SECONDS, AND THE SEQUENTIAL TIME IS IN SECONDS.

Name    Cores   Mem.   Build time   Seq. time
Alfa    2       4      2:13         6
Beta    2       2      3:28         7
Gamma   2       2      3:46         8
Delta   1       1      6:43         11

Each test was performed 10 times, and the presented values represent the minimum time achieved, filtering out worst-case scenarios. It is not the purpose of this paper to do a detailed analysis of variance (the time variance was in the range of a few seconds). In cases when multiple concurrent compilations were measured, the chosen set of results was the one with the shortest compilation time on the slowest machine.

The first set of performed tests demonstrates the standard non-dedicated cluster functionality of Clondike. Figure 4 shows compilation times of a single compilation started from the slowest machine (Delta). Each group of bars shows compilation times for a different cluster configuration. In each bar group, there are 3 running times: the best time achieved in the cluster, the best time achieved in the cluster with IPSEC, and a theoretical minimum time. The first 2 times are clear. The theoretical minimal time accounts for the sequential part of the calculation, in order to isolate the inherent parallelization limitations (due to Amdahl's law) from the inefficiencies of the system itself. The theoretical optimal time is calculated as follows:

T_opt = S_t + ((T_s - S_t) / T_s) * (1 / (sum_{i in N} 1/T_i)),   (1)

where S_t denotes the sequential part of the compilation, T_s denotes the time of the compilation on the node that started it, T_i denotes the sequential compilation time on node i, and N is the set of participating nodes, which includes the node that started the compilation.

Figure 4. Graph with times when a single compilation was started from the machine Delta.
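Formula (1) is straightforward to evaluate; the helper below is our own, with all times in seconds and the Table 1 values plugged in for a compilation started from Delta.

```python
# Evaluate formula (1); helper and variable names are ours.

def theoretical_minimum(s_t, t_s, t_nodes):
    """s_t     -- sequential part S_t of the build (mainly final linking)
    t_s     -- sequential build time T_s on the node starting the build
    t_nodes -- sequential build times T_i of all participating nodes,
               including the starting node itself"""
    combined = 1.0 / sum(1.0 / t_i for t_i in t_nodes)
    return s_t + ((t_s - s_t) / t_s) * combined

# Table 1 values for a build started on Delta (times converted to seconds):
# Alfa 2:13 = 133 s, Beta 3:28 = 208 s, Gamma 3:46 = 226 s, Delta 6:43 = 403 s.
print(theoretical_minimum(s_t=11.0, t_s=403.0,
                          t_nodes=[133.0, 208.0, 226.0, 403.0]))  # ~61.6 s
```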
The ratio of the achieved times to the theoretical minimum times calculated this way reflects the overhead of the system (there is still some inherent limitation due to network transfers, etc., but this cannot be so easily cleaned up). The overheads are expressed in percents.

There are a few noteworthy observations regarding the graph in Figure 4. First, even though the overhead increases with the number of nodes, the measured times are very good. The increasing overhead is due to inefficiencies of the current experimental scheduler, which cannot effectively use all machines, especially at the end of the calculation. This is, however, an area that requires more research in the future. The second important observation is that the overhead due to security (represented by IPSEC) is apparent, but still quite small (15% in the worst case). The quite low security overhead is due to the fact that IPSEC was configured to run the AES encryption, which is very fast on 64-bit platforms. Clondike does not have all security mechanisms implemented yet, but the performance-wise most demanding part of the security, the IPSec [26] based channel security, was used, and therefore the performance figures are representative.

Figure 5 captures the results of a single compilation started from the fastest machine (Alfa). The best non-secured time measured when starting from Alfa was 1 minute and 2 seconds. The important observation here is that, in contrast to Figure 4, the compilation has a lower overhead both with and without IPSEC, as can be seen in Figure 6. The key factor for the lower overhead is the fact that a smaller percentage of the work is sent to other machines. In addition, based on runtime observation of the system behavior while compiling, it seems that the scheduler can use the other machines more effectively when running on Alfa.

Figure 5. Graph with times when a single compilation was started from the machine Alfa.

Figure 6. Graph showing overheads of the runs corresponding to Figures 4 and 5, both with and without IPSEC.

The second set of tests captures an ONDC-specific use case in which all machines start a compilation at the same time. This is a use case that cannot be performed on any standard non-dedicated cluster, since only core machines can use the others in that case. In ONDC, each machine should first do only its own compilation and offer its resources only after it is done with it. Results of this test can be seen in Figure 7: each machine completes its work in about the same or better time than in the non-clustered case (the slightly worse times in some cases are only due to random time variations). Probably the most important number is the time of the compilation on the slowest machine (Delta), since it shows the total time spent from the start of the compilations till the end of the last one. If we divide this number by 4, we get the average build time of each of the kernels in this case; it would be around 58 seconds for a non-secured compilation, so we can see that this case is even more resource-utilization efficient.

Figure 7. Graph with times when each machine simultaneously started one compilation.

XI. CONCLUSIONS

In this paper, we have defined an extension of existing clustering concepts. We have discussed its relationship with other architectures and its advantages, and we have argued why we believe that the architecture is going to be even more interesting in the future. To verify our ideas, we have demonstrated the feasibility of one possible use case of the architecture. There can be many other test combinations even for this simple use case, but we believe the presented results demonstrate clearly enough that parallelization of ordinary programs can be achieved with acceptable overheads.

ACKNOWLEDGEMENTS

This research has been supported by the Czech Grant Agency GACR under grant No. 102/06/0943 and by the research program MSMT 6840770014.

REFERENCES

[1] D. P. Anderson, "BOINC: A system for public-resource computing and storage," in Grid Computing, Fifth IEEE/ACM International Workshop on, 2004, pp. 4-10.
[2] D. Ridge, D. Becker, P. Merkey, and T. Sterling, "Beowulf: Harnessing the power of parallelism in a pile-of-PCs," in Proceedings, IEEE Aerospace, 1997, pp. 79-91.
[3] L. A. Barroso, J. Dean, and U. Holzle, "Web search for a planet: The Google cluster architecture," IEEE Micro, vol. 23, no. 2, pp. 22-28, 2003. [Online]. Available: http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=1196112
[4] "openMosix," http://www.openmosix.org/.
[5] K. Kaneda, Y. Oyama, and A. Yonezawa, "A virtual machine monitor for utilizing non-dedicated clusters," in SOSP '05: Proceedings of the twentieth ACM symposium on Operating systems principles, New York, NY, USA: ACM, 2005.
[6] C. Kauhaus and A. Schafer, "Harpy: A virtual machine based approach to high-throughput cluster computing," 2005, http://www2.informatik.uni-jena.de/~ckauhaus/2005/harpy.pdf.
[7] R. Buyya, D. Abramson, and S. Venugopal, "The grid economy," in Proceedings of the IEEE, 2005, pp. 698-714.
[8] M. Hanzich et al., "Evaluation of heterogeneous nodes in a non-dedicated cluster," in Parallel and Distributed Computing and Systems, 2006.
[9] "Kerrighed," http://www.kerrighed.org/.
[10] "OpenSSI," http://www.openssi.org/.
[11] "MOSIX," http://www.mosix.org/.
[12] "LSF," http://www.platform.com/.
[13] M. Coppola, Y. Jegou, B. Matthews, C. Morin, L. P. Prieto, O. D. Sanchez, E. Y. Yang, and H. Yu, "Virtual organization support within a grid-wide operating system," IEEE Internet Computing, vol. 12, no. 2, pp. 20-28, 2008.
[14] L. Winton, "A simple virtual organisation model and practical implementation," in ACSW Frontiers '05: Proceedings of the 2005 Australasian Workshop on Grid Computing and e-Research, Darlinghurst, Australia: Australian Computer Society, 2005, pp. 57-65.
[15] "XtreemOS," http://www.xtreemos.eu/.
[16] "SETI@home," http://setiathome.berkeley.edu/.
[17] M. Kacer, D. Langr, and P. Tvrdik, "Clondike: Linux cluster of non-dedicated workstations," in CCGrid '05: Proceedings of the Fifth IEEE International Symposium on Cluster Computing and the Grid, Volume 1, Washington, DC, USA: IEEE Computer Society, 2005, pp. 574-581.
[18] K. Lai, L. Rasmusson, E. Adar, L. Zhang, and B. A. Huberman, "Tycoon: An implementation of a distributed, market-based resource allocation system," Multiagent Grid Syst., vol. 1, no. 3, pp. 169-182, 2005.
[19] R. Novaes, P. Roisenberg, R. Scheer, C. Northfleet, J. Jornada, and W. Cirne, "Non-dedicated distributed environment: A solution for safe and continuous exploitation of idle cycles," in Proceedings of the Workshop on Adaptive Grid Middleware, 2003.
[20] M. Stava, "Preemptive process migration in a cluster of non-dedicated workstations," Master's thesis, Czech Technical University.
[21] L. Amar, J. Stosser, A. Barak, and D. Neumann, "Economically enhanced mosix for market-based scheduling in grid os," 2008.
[22] M. Hanzich, F. Gine, P. Hernandez, F. Solsona, and E. Luque, "Coscheduling and multiprogramming level in a non-dedicated cluster," in Recent Advances in Parallel Virtual Machine and Message Passing Interface, 2004, pp. 327-336.
[23] M. Kacer and P. Tvrdik, "File system security in the environment of non-dedicated computer clusters," in PDCAT '07: Proceedings of the Eighth International Conference on Parallel and Distributed Computing, Applications and Technologies, Washington, DC, USA: IEEE Computer Society, 2007, pp. 445-452.
[24] M. Kacer and P. Tvrdik, "Protecting non-dedicated cluster environments by marking processes with stigmata," in Advanced Computing and Communications (ADCOM 2006), International Conference on, 2006, pp. 107-112.
[25] "GNU," http://www.gnu.org/.
[26] S. Kent and K. Seo, "Security Architecture for the Internet Protocol," RFC 4301 (Proposed Standard), Dec. 2005, http://tools.ietf.org/html/rfc4301.

To Determine the Weight in a Weighted Sum Method for Domain-Specific Keyword Extraction

Wenshuo Liu, Wenxin Li
Key Laboratory of Machine Perception, Peking University / Beijing Supertool Internet Technology Co., Ltd, Beijing, China
{lwshuo, lwx}@pku.edu.cn

Abstract—Keyword extraction has been a very traditional topic in Natural Language Processing. However, most methods have been too complicated and slow to be applied in real applications, for example in web-based systems. This paper proposes an approach which completes some preparatory work focusing on exploring the linguistic characteristics of a specific domain, in order to simplify the real extraction process. It is a weighted sum method, and the preparatory work focuses on finding the weights. Four different features are used: TF×IDF, the part-of-speech (PoS) tag, the relative position of first occurrence, and the chi-square statistic. The model used to learn the weights is very much like a perceptron. Once we have the weights, the extraction can be completed by addition, multiplication and sorting, which are quite simple operations for a modern computer. Experimental results show the effectiveness of the proposed approach.

I. INTRODUCTION

Keyword extraction is the process of extracting a few salient words from a text and using these words to summarize its content. It is important for many text applications, such as document retrieval, and has been widely studied for a long time in the natural language processing community. Traditional methods focused on efficient algorithms to improve the performance of the extraction itself; in such methods, every step must be performed for every document. Domain-specific keyword extraction came into sight when researchers found out that fully exploiting domain-specific information can greatly improve the performance of this task. Different domains have different characteristics in their usage of words, so a certain weight vector working well in one domain might be totally ineffective in another domain.

In general, keyword extraction involves assigning scores to each candidate word considering various features, sorting the candidates according to the score, and choosing the few top ones. As long as we have a proper weight vector, the weighted sum of the feature vector can be a good choice for the score. The weight vector is mainly the domain-specific information we need to explore here. In my work, the weight extraction part is finished once and for all, which reduces the burden in the real extraction process. The difference from traditional methods is illustrated in Figure 1. This article is primarily about constructing a model to learn the weight vector.

II. RELATED WORK

Probabilistic methods and machine learning have been widely used in the task of keyword extraction. Peter D. Turney (1999) developed the system GenEx, which exploited a genetic algorithm and used to be the state of the art. Ian H. Witten, Gordon W. Paynter, Eibe Frank, Carl Gutwin, and Craig G. Nevill-Manning (1999) described a simple procedure, called KEA, which was based on Naive Bayes. Their approaches constructed the prediction models from texts with manually assigned keywords. KEA was proved to be equally effective compared with GenEx, and even outperformed it when fully exploiting domain-specific information. Hulth (2004a) and Hulth (2004c) presented approaches using supervised machine learning. Graph-based algorithms have also been explored: Xiaojun Wan, Jianwu Yang and Jianguo Xiao (2007) proposed an iterative reinforcement approach to simultaneously finish the tasks of keyword extraction and document summarization; their approach fully exploited the sentence-to-sentence, word-to-word, and sentence-to-word relationships.

III. DATA REPRESENTATION

The input document will first be split up to get separate terms. Words which are so common that they have no differentiating ability, such as "的", are stored in a stop list and removed during pre-processing. For keyword extraction, the terms themselves are useless; it is their attributes that matter. I choose four attributes to form a feature vector for each candidate keyword, so each candidate word is represented as a four-dimensional feature vector.

A. TF×IDF

TF×IDF combines the term frequency (TF) and the inverse document frequency (IDF). It is designed to measure how specific a term T is to a certain document D:

TF×IDF(T, D) = P[term in D is T] × log P[T in a document].  (1)

The TF is measured by counting the times that term T occurs in document D, and the IDF by counting the number of documents that contain T in the corpus of the specific domain.

B. PoS

When inspecting manually assigned keywords, the vast majority turn out to be nouns. But there are still differences between different domains. For example, in entertainment news, keywords might always be people's names, which are nouns, while in the sports field verbs are also quite important. We count the occurrences of each kind of PoS tag among the manually assigned keywords in the whole corpus and then divide by the total number of keywords. For example, when we consider nouns:

PoS(noun) = (manually assigned keywords which are nouns) / (manually assigned keywords).  (2)

The results are numbers between 0 and 1, and they indicate which kinds of words are more likely to be keywords in the target domain.

C. Relative Position of First Occurrence

Not only the occurrence, but also the location of a term is important. Terms occurring in, for example, headlines and in sentences at certain positions, such as the first sentence of a paragraph, are shown to contain more relevant terms. This feature is calculated as the number of words that precede the term's first appearance, divided by the document's length:

RPFO(T, D) = (position of first appearance) / (length of the document).  (3)

The result is a number between 0 and 1 and indicates the proportion of the document preceding the term's first appearance.
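For illustration, the four features of one candidate can be assembled as sketched below. The tokenization, the corpus statistics, and every name here are our assumptions rather than the ICTCLAS interface or the authors' code; the chi-square value is taken as precomputed from formula (4) in the next section.

```python
import math

# Sketch: build the 4-dimensional feature vector of one candidate term.
# doc_tokens: the segmented document; df: term -> number of documents in
# the domain corpus containing the term; n_docs: corpus size.

def feature_vector(term, doc_tokens, df, n_docs, pos_stat, chi_stat):
    tf = doc_tokens.count(term) / len(doc_tokens)    # P[term in D is T]
    idf = math.log(n_docs / (1 + df.get(term, 0)))   # from document counts
    rpfo = doc_tokens.index(term) / len(doc_tokens)  # formula (3)
    return (tf * idf,   # feature 1: TF x IDF, formula (1)
            pos_stat,   # feature 2: PoS statistic of the term's tag, formula (2)
            rpfo,       # feature 3: relative position of first occurrence
            chi_stat)   # feature 4: chi-square statistic, formula (4)

def score(features, weights):
    # Weighted sum used to rank candidates; the top 7 become the keywords.
    return sum(f * w for f, w in zip(features, weights))
```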

D. Chi-Square Statistic

The chi-square statistic is used to test the dependence between a term and a domain. For term T and domain D, it is defined as:

CHI(T, D) = ((n11 × n22 - n12 × n21) × (n11 + n12 + n21 + n22)) / ((n11 + n12)(n21 + n22)(n11 + n21)(n12 + n22)),  (4)

where n11 indicates the times that T occurs in domain D, n12 the times that T occurs in domains other than D, n21 the times that terms other than T occur in domain D, and n22 the times that terms other than T occur in domains other than D. The higher this value is, the more dependent term T is on the domain D. If n11×n22 - n12×n21 > 0, then term T is positively relevant to domain D, and if n11×n22 - n12×n21 < 0, then term T is negatively relevant to domain D.

IV. TRAINING MODEL

So far we have talked about how candidate keywords are generated and represented. In order to get the weighted sum of the four-dimensional feature vector, we still miss the weight vector. We need weights because the four features have different discriminating abilities: apparently, the more a feature can discriminate between keywords and non-keywords, the higher the weight it should be assigned. We can find the weights manually, and actually that is the inspiration of this work, but doing it manually is too time-consuming if we try to determine weight vectors for many domains.

We use some learning ideas from the perceptron. The weight vector is initially set to all zeros, namely (0, 0, 0, 0). For every training article we examine the difference between keywords and non-keywords and update the weight vector according to the difference: first, on each dimension, we examine the average value of keywords and of non-keywords respectively, calculate the difference between them, and divide the result by the sum over all candidate keywords. For example, when we consider the feature TF×IDF:

β_TF×IDF = E(keywords' TF×IDF) / Σ TF×IDF - E(non-keywords' TF×IDF) / Σ TF×IDF.  (5)

After similar calculations on the other dimensions we have a vector (β_TF×IDF, β_PoS, β_FirstOccurrence, β_Chi) for the update. So after the n-th article, we update the weight vector ω by:

ω_{n+1} = ω_n + β.  (6)

Thus, with every article, we use the result to update the weight vector by adding it to the current weights.
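A sketch of the perceptron-like update of formulas (5) and (6) follows; the per-article data layout is our own assumption (each article as a list of (feature_vector, is_keyword) pairs over its candidates).

```python
# Perceptron-like training of the weight vector (formulas (5) and (6)).
# An article is a list of (feature_vector, is_keyword) pairs; layout is ours.

def beta_for_article(candidates, dims=4):
    beta = []
    for d in range(dims):
        keyword_vals = [f[d] for f, is_kw in candidates if is_kw]
        other_vals = [f[d] for f, is_kw in candidates if not is_kw]
        total = sum(f[d] for f, _ in candidates) or 1.0
        # Difference of the average feature values of keywords and
        # non-keywords, normalized by the sum over all candidates (5).
        avg_kw = sum(keyword_vals) / max(len(keyword_vals), 1)
        avg_other = sum(other_vals) / max(len(other_vals), 1)
        beta.append((avg_kw - avg_other) / total)
    return beta

def train(articles, dims=4):
    omega = [0.0] * dims            # initial weight vector (0, 0, 0, 0)
    for article in articles:
        step = beta_for_article(article, dims)
        omega = [w + b for w, b in zip(omega, step)]  # omega_{n+1} = omega_n + beta
    return omega
```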

V. EXPERIMENTS AND EVALUATION

We used the Chinese lexical analysis system ICTCLAS (Institute of Computing Technology, Chinese Lexical Analysis System), developed by the Chinese Academy of Sciences, to complete word segmentation and PoS tagging. Then we scraped 2563 web pages from http://tech.mop.com/; the web page articles are classified into suitable domains, and all texts here are about information technology (IT). The whole text and the keywords manually assigned in the meta keywords tag were extracted, so all the articles in the training corpus come with manually assigned keywords for the model to learn; on average, 2.8 keywords are manually assigned per text. 1563 of the pages were used for training and 1000 for testing. The weight vector we get for this IT domain is (66.76, 22.59, 0.250, 0.202).

In the following experiments we choose the 7 words with the highest scores as the keywords and compare them with the manually assigned keywords. The results are presented in Table 1, which shows precision (P), recall (R), and F-measure (F). P refers to the proportion of automatically selected keywords which are also manually assigned; R refers to the proportion of manually assigned keywords selected by this method. The F-measure combines precision and recall and is usually used as a standard information retrieval metric:

F = 2 × P × R / (P + R).  (7)

If we denote ASMA = the number of terms both automatically selected and manually assigned, AS = the number of terms automatically selected, and MA = the number of terms manually assigned, then P and R are defined as:

P = ASMA / AS,  (8)
R = ASMA / MA.  (9)

After observation, we found out that the manually assigned keywords are not all reliable, so we manually refined 200 documents; 100 of them are used for training and the others for testing. The second experiment shows a significant increase. Table 1 compares the performance of our method on the two data sets.

TABLE I. THE PERFORMANCE

               P       R       F
raw data       0.554   0.449   0.496
refined data   0.732   0.644   0.685

VI. CONCLUSION AND FUTURE WORKS

In this article, we explored a new method for domain-specific keyword extraction. This method focuses on doing enough and effective preparatory work to explore the linguistic characteristics of a specific domain, and thus simplifies the real extraction task. The experiments show that it did lead to a better performance. However, there is still much to improve. This method relies heavily on the data; as long as the data is reliable, it can perform quite well, but we still cannot make sure that the weight vector we get is the optimum solution. Moreover, the weighted sums, which are used to rank the candidate keywords, are not fully exploited; if used properly, they might well benefit text categorization, document retrieval, and other natural language processing tasks.

ACKNOWLEDGMENT

I'd like to thank Minghui Wu, Xuan Zhao, Hao Xu, Songtao Chi and Chaoxu Zhang from Beijing Supertool Internet Technology Co., Ltd for all the help and suggestions they have provided. I would especially like to take the opportunity to thank professor Von-Wun Soo, who has been so kind and given me a lot of valuable instructions while I was in National Tsing Hua University. I also want to thank the Chun-Tsung Endowment Fund for giving me a chance to take part in real research.

REFERENCES

[1] Anette Hulth, "Improved automatic keyword extraction given more linguistic knowledge," in Proceedings of the 2003 Conference on Empirical Methods in Natural Language Processing (EMNLP'03), Sapporo, July 2003, pp. 216-223.
[2] Anette Hulth, "Combining Machine Learning and Natural Language Processing for Automatic Keyword Extraction," Ph.D. thesis, Department of Computer and Systems Sciences, Stockholm University, 2004.
[3] Anette Hulth, "Enhancing linguistically oriented automatic keyword extraction," in Proceedings of the Human Language Technology Conference/North American Chapter of the Association for Computational Linguistics Annual Meeting (HLT/NAACL 2004), Boston, May 2004.
[4] Anette Hulth, "Reducing false positives by expert combination in automatic keyword indexing," in Nicolov, N., Botcheva, K., Angelova, G., and Mitkov, R. (eds.), Recent Advances in Natural Language Processing III, Current Issues in Linguistic Theory (CILT), John Benjamins, 2004, pp. 367-376.
[5] Anette Hulth and Beata B. Megyesi, "A study on automatically extracted keywords in text categorization," in Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association of Computational Linguistics, Sydney, July 2006, pp. 537-544.
[6] Xiaojun Wan, Jianwu Yang and Jianguo Xiao, "Towards an iterative reinforcement approach for simultaneous document summarization and keyword extraction," in Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics (ACL'07), Prague, June 2007, pp. 552-559.
[7] Takashi Tomokiyo and Matthew Hurst, "A language model approach to keyphrase extraction," Sapporo, July 2003.

[8] Li Sujian, Wang Houfeng, Yu Shiwen and Xin Chengsheng, "News-oriented automatic Chinese keyword indexing," Sighan Workshop, ACL'03, Sapporo, July 2003.
[9] Peter D. Turney, "Learning to extract keyphrases from text," Technical Report ERB-1057, NRC 44105, National Research Council, Institute for Information Technology, 1999.
[10] Eibe Frank, Gordon W. Paynter, Ian H. Witten, Carl Gutwin, and Craig G. Nevill-Manning, "Domain-specific keyphrase extraction," in Proceedings of the 16th International Joint Conference on Artificial Intelligence (IJCAI'99), 1999, pp. 668-673.
[11] Peter D. Turney, "Learning algorithms for keyphrase extraction," Information Retrieval, vol. 2, 2000, pp. 303-336.

Flow-based Description of Conceptual and Design Levels

Sabah Al-Fedaghi
Computer Engineering Department, Kuwait University, Kuwait
sabah@alfedaghi.com

Abstract—Building an information system involves a first phase of conceptual modeling of the "real world domain," then a second phase of design of the software system. For the system analysis process that produces the conceptual description, and for describing the software system, object-oriented methods and languages (e.g., UML) are typically used. It is known, however, that UML lacks constructs to facilitate rich conceptual modeling. This paper argues that it is not a matter of lacking appropriate conceptual structures, but rather that UML is inherently an unsuitable tool for use in conceptual modeling; consequently, enhancing UML to incorporate the semantics necessary for conceptual modeling will not lead to the desired results. It proposes an alternative to extending UML: a new conceptual model called the flow model (FM). The flow model provides a conceptually uniform applied description that is appropriate for the system analysis phase; furthermore, its features suggest that it is also appropriate for the system design phase, thus easing the transition between these two stages.

Keywords—conceptual modeling, UML, software system analysis

I. INTRODUCTION

Understanding and communication processes in the business and organizational domain are an essential aspect of the development of information systems. Conceptual modeling is a technique that represents the "real world domain" and is used in this understanding and communication. A conceptual model is a picture describing this real-world domain independent of any aspects of information technology; it should reflect the reality of an organization and its operations. An information system (IS) represents a real business establishment, and building an IS begins by drawing a picture of the business establishment as part of a real-world domain. The resultant model serves as a guide for the subsequent information system design phase, which involves the description of the software system under development. The IS design phase is concerned with describing the information system through design models intended to describe the software system; this later phase typically utilizes object-oriented techniques or their semantic extensions. UML is an important tool in both of these processes, and this paper concentrates on UML as the most widely used language for these purposes.

Conceptual modeling languages are intended specifically for describing the application domain, whereas UML has been developed with the specific intention of designing and describing software. According to Evermann and Wand [9], "UML is suitable for conceptual modelling but the modeller must take special care not to confuse software aspects with aspects of the real world being modelled." Researchers have examined, and some have proposed, extending the use of object-oriented software design languages such as UML to conceptual modeling (e.g., Evermann and Wand [10], Dussart et al. [7], Opdahl and Henderson-Sellers [12]). The problem with this approach is "that such languages [e.g., UML] possess no real-world business or organizational meaning," i.e., "it is unclear what the constructs of such languages mean in terms of the business" [9]. The object-oriented IS design domain deals with objects and attributes, while the real-world domain deals with things and properties. Ambler observed that "although the UML is in fact quite robust, the reality is that it isn't sufficient for your modeling needs" [3]; he then suggested using "UML as a base collection of modeling techniques which you then supplement with other techniques to meet your project's unique needs" [3]. According to Bell [4], the production of UML diagrams, as opposed to the engineering content behind them, has become the single most important activity in the software-development life cycle.

Works that extend UML for use in conceptual modeling claim that UML can be used in both systems analysis and system design; there are many proposed UML extensions to enhance its capabilities through incorporating semantic features for conceptual modeling. In this paper, UML is used as an example of an object-oriented modeling language and methodology. The claims in this paper, however, go in a different direction: we claim that FM, through the development of a conceptual modeling methodology based on the notion of flow, provides conceptually a "better" description. FM has not yet been formalized. These claims are illustrated in Fig. 1.

Figure 1. The claims of the paper: for conceptual models (real-world domain descriptions), FM is strongly claimed and UML (including semantically extended UML) is weakly claimed; for information system design and software modeling, UML and the combination FM + UML (which needs further research) apply.

II. THE FLOW MODEL

The flow model (FM) was first introduced by Al-Fedaghi [2] and has been used since then in several applications such as information requirements [1]. Even though this section is simply a review of the basic model, it includes new illustrations of the model. A flow model is a uniform method to represent things that "flow." "Things that flow" include information, materials (e.g., in manufacturing), money, etc. To simplify the review of FM, we introduce flow in terms of information flow.

Information goes through a sequence of states as it moves through the stages of its lifecycle, as follows:
1. Information is created (i.e., it is generated as a new piece of information using different methods such as data mining).
2. Information is processed (i.e., it is subjected to some type of process, e.g., compressed, translated, mined).
3. Information is disclosed/released (i.e., it is designated as released information, ready to move outside the current sphere, like passengers ready to depart from airports).
4. Information is transferred (disclosed) to another sphere (e.g., from a customer's sphere to a retailer's sphere).
5. Information is received (i.e., it arrives at a new sphere, like passengers arriving at an airport).
6. Information is stored (i.e., it remains in a stable state without change until it is brought back to the stream of flow again).
7. Information is used (i.e., it is utilized in some action).
8. Information is destroyed.

The first five states of information form the main stages of the stream of flow; they are the only possible "existence" patterns in the stream of information, as illustrated in Fig. 2.

Figure 2. States of information: created, processed, disclosed, received, communicated.

To follow the information as it moves along different paths, we can start at any point in the stream. Suppose that information enters the processing stage, where it is subjected to some process. The following are the ultimate possibilities for the information:
1. It is processed in such a way that it generates implied information (e.g., comparing certain statistics generates the information that Smith is a risk).
2. It is processed in such a way that it generates new information (e.g., a is the father of b and b is the father of c generates the information that a is the grandfather of c).
3. It is used to generate some action (e.g., the FBI sends its agents to arrest the spy who wrote the encoded message).
4. It is stored.
5. It is disclosed and transferred to another sphere.
6. It is destroyed.

The storage and uses/actions sub-stages (called gateways) can be found in any of the five stages. Storage is a sub-stage because it occurs at different stages: when information is created (stored created information), processed (stored processed information), or received (stored received/raw information). In the uses sub-stage, information is utilized in some action, analogous to police rushing to a criminal's hideout after receiving an informant's tip; using information indicates exiting the information flow to enter another type of flow, such as actions. In the release and transfer stages, information is not usually subject to these sub-stages, so we apply them only to the receiving, processing, and creation stages, without loss of generality.

The five-stage scheme can be applied to individuals and to organizations. Suppose that a small organization has a manager and a secretary, and two departments with two employees in each department. Then it comprises nine information schemes: one for the organization at large, one for the manager, one for the secretary, one for each of the two departments, and one for each employee. Information is reusable because a copy of it is assigned to each agent.
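As a reading aid only, the five stages and the movement of one information item between spheres can be rendered as a small data model; this encoding is our own and is not part of the FM formalism.

```python
from dataclasses import dataclass, field
from enum import Enum

class Stage(Enum):
    # The five FM stages of the information stream.
    CREATED = "created"
    PROCESSED = "processed"
    DISCLOSED = "disclosed"        # released, ready to leave the sphere
    COMMUNICATED = "communicated"  # being transferred between spheres
    RECEIVED = "received"

@dataclass
class InfoItem:
    content: str
    sphere: str                 # e.g. "manager", "department 1"
    stage: Stage
    trail: list = field(default_factory=list)  # (sphere, stage) history

    def move(self, stage, sphere=None):
        self.trail.append((self.sphere, self.stage))
        self.stage = stage
        self.sphere = sphere or self.sphere
        return self

# An item created in one sphere, released, transferred, and received:
note = InfoItem("meeting moved to 3pm", "secretary", Stage.CREATED)
note.move(Stage.DISCLOSED).move(Stage.COMMUNICATED)
note.move(Stage.RECEIVED, sphere="manager")  # a gateway here may trigger an action
```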

To illustrate the "gateway" sub-stage, consider a person in Barcelona who uses the Internet to ask a person in New York whether it is raining in New York. The query flows to the receiving stage of the person in New York. The New Yorker's reception of the query triggers an action, such as physically moving to look outside to check whether it is currently raining: the gateway in his/her information system transforms the information flow into an action flow.

III. UML VS. FM

To illustrate the advantages of FM-based conceptual modeling, we compare UML and FM descriptions. The discussion will include some important features of FM. Fig. 3 shows a typical example of a UML graph [13]: a Shop with two use cases, Sell product and Customer support, connected to the actors Customer and Vendor.

Figure 3. Typical UML graph.

The actors in the UML description of Fig. 3 are modeled as black boxes without interiors. "Sell product" and "Customer support" are processes in the Shop, and the Shop is an active entity in the interaction. However, the Shop is different from "Sell product" and "Customer support." It may be argued that "Sell product" is that part of the Shop that interacts with the customers; in that view, the two actors seem to be directly connected to the interior of the Shop. Nevertheless, the "global sphere" of the Shop may or may not allow this direct interaction, and the actors are connected to the Shop only indirectly, through the presence of the two use cases in the Shop.

The point here is that the semantics of the "lines" connecting the Shop to Customers and Vendors are ambiguous. The "Sell product"-Customer connection indicates a flow of information (e.g., an order and the response to it); the Vendor-"Sell product" connection may also denote such flows. There is an apparent flow of materials (products) in the Customer-"Customer support" and Vendor-"Customer support" connections. If we replace Shop with "Market," then the semantics of the connections represent everything: eggs, orders, money, personnel, etc. Conceptually, this is an ambiguous connection. Each of these flows may necessitate different requirements that should be understood at this level by all participants; for example, generating personal information may be governed by privacy rules.

Fig. 4 shows an FM description that corresponds to Fig. 3. In FM, the Shop and the actors are spheres. The "flowing things" are information, products, and money, and each of these flows is represented by its own flow in the actual conceptual description. Spheres can receive, process, create, release, and communicate (transport) these flowing things. Fig. 4 is drawn in a very general way to illustrate different directions of flow. The usual situation is that every thing that is received, created, processed, released, and communicated by the two use cases is first received, created, processed, released, and communicated by the Shop, but this is not always the case.

Figure 4. FM description that corresponds to Figure 3: the informational spheres of the Shop (with "Sell product" and "Customer support"), the Customer, and the Vendor, each with creation, processing, receiving, disclosure, and communication stages.

Consider the following hypothetical situations:
1. A system that registers only information, e.g., a time clock that reads the information on an ID card and registers the ID.
2. A system that reads information and processes it, e.g., a time clock that scans employees' fingerprints and then searches a fingerprint database to decide whether to allow entrance.
3. A system that reads data, processes it, and then produces new information, e.g., a machine that reads the health data of a person and compares it with tables to decide whether the person is fit for a certain job or not.

All three situations are represented in UML by an actor connected to a use case, even though each may necessitate different requirements that should be understood at this level by all participants.

The following example shows specific types of flows. Example: Imagine the following scenario:
- The customer creates an order for a product.
- The "sell product" module in the shop receives the order and processes it.
- The "sell order" module sends the request to another processing module, "customer support," to deliver the product to the customer.
- The "customer support" module sends a request to the vendor to deliver the product to the customer.
- The vendor receives the request, creates the product, and delivers it to the customer.

Fig. 5 shows the resultant flows in the scenario. Here there are two types of flows: an information flow (orders and requests) and a physical products flow.

Figure 5. Example FM with two types of flow: the informational spheres of the Customer, the Shop ("Sell product" and "Customer support"), and the Vendor, together with the Vendor's physical sphere (manufacturing, transportation) and the Customer's physical sphere (transport, receiving).

In Fig. 5, at circle 1 the customer creates an order. The order flows from the creation stage to the disclosure stage, then to the communication stage, and crosses to the shop's informational sphere at circle 2. There it flows to "sell order" in the shop's processing stage (circle 3). The "sell order" module sends it to the "customer support" module to initiate a request to deliver the product (circle 4). The disclosure stage causes the creation of a delivery request that is to be released to the vendor (circle 5). The delivery request flows to the vendor's informational sphere (circle 6) and on to its processing stage, which triggers (circle 7) actual actions in the vendor's physical sphere. Notice that the dotted arrow indicates a change in the type of flow, from informational flow to actions flow; the triggered flow does not necessarily start from the communication/transporting stage. In this last sphere the product is created (manufactured) and transported to the customer (circle 8). Similarly, the customer's transporting action (e.g., going to the post office) allows him/her to receive the product (circle 9). The figure shows the remaining three spheres with the creation stage omitted because it is unused. Such a scenario is understood by the customer, the shop manager, analysts, and software developers alike.

We claim that the FM description is a "better" (i.e., more complete, more uniform) conceptual tool than UML. Even though the FM model appears to be an "extensive" description in comparison with the UML description, the FM representation is simple because of its repeated application of the five-stage schema and the uniform application of a single flow in each sphere. The sweeping generalization of notions, accompanied by zooming in on the interiors of spheres, promises abundant modeling benefits. Our weaker claim is that FM can be complemented with some UML structures, such as procedural control flow inside the modules "sell product" and "customer support"; further research may lead to injecting some UML constructs into FM. We will not discuss these details here.

IV. COMPARISONS OF SOME FEATURES

FM differs from UML in its basic structure and components. This section compares some of their features.

A. Actors and Spheres

In UML, an actor is an outsider who interacts with the system. "Outside" here refers to controllability: the system cannot control the outsider. The notion of actors is introduced for lack of a common notion that covers systems, human beings, and other interacting entities; actors are abstractions of something common to outsiders, and we may also introduce artificial actors, such as those representing common behavior shared by a number of different actors. Every actor interacts with the system: the system acts on actors and actors act on the system. Interactions among actors are usually minimal and related to their interactions with the system, and these interactions cannot be represented in the same diagram as the system. When two systems interact with each other, then both are actors; conceptually, this makes the concept of actors very confusing. In computer science there are, most of the time, two sides, the computer and the applications, and distinguishing between the system and the actors is suitable for computer developers. UML, however, mixes information spheres with such notions as events, and the only thing that distinguishes "the system" from other actors is that the system is the central actor.

In FM, the basic concept is the sphere, and there is no necessity for differentiating between actors and the system. Actors and systems are simply spheres of information that interact (hence communicate) with each other. All are "outside" one another, but they interact; they are, again, spheres just like any other spheres, so from the functional point of view they are all interacting entities. The five-stage schema in FM provides such a general notion: there is a uniform modeling of the interiors and exteriors of spheres, since we can now model the interior flow (stages) of all assemblies. This is in line with Occam's razor, where one should not make more assumptions than the minimum needed. This conceptual generalization eliminates a great deal of UML technicalities but also allows more expressive power. Notice that FM allows interaction between different spheres regardless of whether the spheres (e.g., actors) are in the interior or exterior of any other sphere (e.g., the system). This is especially important for a network environment, since FM can represent any type of interaction topology.

In FM, a sphere does not "turn on" until the flow reaches it (i.e., until it receives information). Events are products of spheres; no spheres means no events. Controllability in UML refers to the interior processes of an outsider that the system cannot control. This is not a central concern in FM: interaction between the system and actors embeds the controllability of interaction from either side, and a UML "navigable association," which indicates one actor controlling or "owning" the interaction, can be reflected in FM by the flow of information. Such a flow happens when the controlling sphere allows the flow of information to the controlled sphere. Moreover, the flow of information may eliminate the need to specify certain kinds of associations among UML actors. Roles (a term used extensively in UML-related works) are simply different kinds of interactions between two spheres. UML associations are associations among spheres, regardless of whether they are systems or not. The FM approach is likewise suitable for describing generalizations (e.g., inheritance, specialization, or subclassing) as structural relationships among actors: an information sphere (as an example of a sphere) <<includes>> information spheres regardless of whether they are a system's information spheres or not.

B. Use cases

Use Case diagrams are descriptions of the relationships between a group of use cases and the participants (e.g., actors who interact with the system) in the process. Use cases are supposed to be top-level services that the system provides to its actors; internal behavior within the system should not appear at the top system level. It is important to notice that use cases are not (internal) design tools; rather, they are "communication": the diagrams tell what the system should specify, and they facilitate requirement specifications and communication among participants, meaning that they capture requirements and features as commonly understood by users, managers, and developers. Use cases can <<include>> other use cases; however, UML is loaded with technical considerations, and the status of common use cases (those used both by parts of the system and by actors) is not clear.

Example: Suppose we want to diagram the interactions between a user, a Web browser, and the server it contacts [6]. In UML, it is not clear what interaction lines drawn directly between the actors would mean, so UML does not allow them; thus, at the least, we would draw two diagrams.

Figure 6. The interactions between a user, a web browser, and the server it contacts (from [6]): one diagram with the Browser (Start/Stop, Set Target URL, Receive Page) used by the User and the Server, and one with the Server (Serve Page) used by the Browser and the Administrator.

Instead of the UML description, Fig. 7 shows the different FM information spheres and the possible flows of information among them. In FM, it is easy to identify the use cases of outside spheres by identifying where they cross the sphere's boundary. Here we achieve simplification in addition to preserving the identification of use cases.

Figure 7. Interactions among several infospheres: User, Administrator, Browser, and Server, each with its communication, receiving, disclosure, processing, and creation stages.

V. UML WITH SEMANTICS

To achieve some depth in making comparisons with the FM methodology, this section discusses some proposals to enrich UML with semantics. There are many extensions of UML in this direction; we concentrate on works that constrain UML constructs in order to align it with conceptual modeling. Evermann and Wand [9] proposed "to extend the use of object-oriented design languages [such as UML] into conceptual modeling for information system analysis." The proposed semantics are used to derive modeling rules and guidelines on the use of object-oriented languages for business domain modeling.

Fig. 8 shows an example of a UML class diagram (from [11], used by [9]). It depicts a situation typically found in object-oriented models: a Customer class (Name, Address, creditRating():string) specialized into Personal Customer (CreditCard#) and Corporate Customer (ContactName, creditRating, creditLimit, Remind(), billForMonth(Integer)); an Order class (dateReceived, isPrepaid, number:String, price:Money) associated with Customer; Order Line items (Quantity:Integer, Price:Money, IsSatisfied:Boolean) linked to a Product; and a Sales Rep association (0..1) to Employee.

Figure 8. Example UML class diagram without ontological semantics (from [11], used by [9]).

Clearly, the description is not suitable for conceptual models. Consider the edge between Customer and Order: it is supposed to stand for a relationship between them. Ontologically, however, there is nothing in the "real world domain" called "order" that stands side by side with its actor; an order is an act of some agent, a request to purchase something. Ordering, requesting, demanding, signaling, objecting, expressing, etc., all have a similar ontological nature: they are acts of an agent. What type of relationship can exist between a customer (a person or corporation) and an "order"? Having the order stand side by side with its actor (e.g., its creator or processor) is conceptually disturbing; the best we can do is to explain the edge between Customer and Order as an actor/actee correspondence.

What, then, is the relationship between an agent and its acts? The answer is that "acts flow out of agents," in the sense that agents create, process, receive, transfer, and communicate acts. "Order" is a type of "thing that flows"; as mentioned previously, orders and products are the "things that flow" in this example.

Fig. 9 shows the FM description corresponding to Fig. 8. The black boxes represent the customers. We assume that the company has two units: one to control the queue of orders (the Order Line box) and one to deliver products (the Product box). In the corporate customer's sphere, orders are created either by the corporation itself or by some of its employees; note that the corporate customer's sphere contains the employee's sphere. Orders are also created by individual customers. The flows from these sources of orders are received by the company (circles 1 and 2). The company diverts the flow to the Order Line sphere (circle 3), which in turn triggers (circle 4) the Product sphere to release the product (circle 5). In Fig. 9, the arrows (except in the product sphere) represent the flow of orders, and the dotted arrow denotes a change in the type of flow from "flow of orders" to "flow of products." To simplify the figure, the unused internal arrows have been removed. The FM description could include a great deal more detail; here, however, we tried to stick to what is expressed by the UML model.

Notice that each entity in FM has multiple spheres, just as a human being has several "systems": the digestive system, the nervous system, the musculoskeletal system, etc. A property is defined in FM as something that flows in entities: food does not flow in the nervous system, and obtaining oxygen is not the business of the digestive system. Hence, a business information sphere (here we do not mean the computer information system denoted previously as IS) is different from the business's physical sphere, which in turn is different from its money sphere.

Figure 9. FM description of the system in Figure 8: the Company sphere (receiving, processing, creation, releasing, transporting) connected to the Personal customer, Corporate customer and Employee spheres, and to the Order Line and Product spheres.

We claim that the FM model as described in Fig. 9 is a conceptualization that is suitable for both systems analysts and software developers.

VI. CONCLUSION

This paper has examined the problem of building a conceptual model of a "real world domain" as part of the process of developing a business information system. For this purpose, object-oriented techniques are typically utilized, and extensions of UML, based on UML as an object-oriented modeling language, are proposed for conceptual modeling. While such efforts are commendable, they should not hinder researchers from seeking fundamentally different approaches. The flow model introduced in this paper is drastically different in nature from UML. We claim that the FM model as described in Fig. 9 is a conceptualization that is suitable for systems analysts: the flow notion is familiar in analysis, as in the circular flow model used by economists and the supply chains used in operations research, where many flow-based models are in use. It is also suitable for software developers, for whom control flow is a basic concept in computer science. In addition, there is a possibility that FM can complement and be complemented by UML in developing new methodologies for conceptual modeling.

REFERENCES
[1] S. Al-Fedaghi, "Some aspects of personal information theory," 7th Annual IEEE Information Assurance Workshop, United States Military Academy, West Point, NY, 2006.
[2] S. Al-Fedaghi, "Software engineering interpretation of information processing regulations," IEEE 32nd Annual International Computer Software and Applications Conference (IEEE COMPSAC 2008), Turku, Finland, July 28 - August 1, 2008.
[3] S. Ambler, "Be realistic about the UML: it's simply not sufficient," http://www.agilemodeling.com/essays/realisticUML.htm (accessed March 2008).
[4] A. Bell, "Death by UML fever," ACM Queue, vol. 2, no. 1, March 2004.
[5] M. Bunge, Ontology I: The Furniture of the World, Treatise on Basic Philosophy, vol. 3, Reidel Publishing Company, Dordrecht, Holland, 1977.
[6] "UML Use Case Diagrams: Tips and FAQ," Carnegie Mellon University, http://www.andrew.cmu.edu/course/90-754/umlucdfaq.html (accessed March 2008).
[7] A. Dussart, B. Aubert, and M. Patry, "An evaluation of inter-organizational workflow modeling formalisms," working paper, Ecole des Hautes Etudes Commerciales, Montreal, 2002.
[8] J. Evermann, "Thinking ontologically: conceptual versus design models in UML," in P. Green and M. Rosemann (eds.), Business Systems Analysis with Ontologies, Idea Group Publishing, 2005.
[9] J. Evermann and Y. Wand, "Ontology based object-oriented domain modelling: fundamental concepts," Requirements Engineering, vol. 10, pp. 146-160, 2005. http://www.mcs.vuw.ac.nz/~jevermann/EvermannWandRE05.pdf (accessed March 2008).
[10] J. Evermann and Y. Wand, "Towards ontologically based semantics for UML constructs," in H. Kunii, S. Jajodia, and A. Solvberg (eds.), Proceedings of the 20th International Conference on Conceptual Modeling, Yokohama, Japan, Nov. 27-30, 2001.
[11] M. Fowler and K. Scott, UML Distilled: A Brief Guide to the Standard Object-Oriented Modelling Language, Addison-Wesley, Reading, MA.
[12] A. Opdahl and B. Henderson-Sellers, "Ontological evaluation of the UML using the Bunge-Wand-Weber model," Software and Systems Modeling, vol. 1, no. 1, pp. 43-67, 2002.
[13] Reference.com, "Unified Modeling Language," http://www.reference.com/browse/wiki/Unified_Modeling_Language (accessed March 2008).

A Method of Query over Encrypted Data in Database

Lianzhong Liu and Jingfen Gai
Key Laboratory of Beijing Network Technology, School of Computer Science and Engineering,
Beijing University of Aeronautics and Astronautics, Beijing, 100083 China
lz_liu@buaa.edu.cn, gaijingfen0313@163.com

Abstract—The encryption mechanism is an effective way to protect the sensitive data in a database from various attacks, but when data is encrypted, the query performance degrades greatly. In this paper, a scheme to support queries over encrypted data is proposed. We extend the two-phase query framework and construct different indexes for different data types: a new method to construct a bucket index for numeric data, which supports range queries, and a bloom filter compression algorithm for character strings, which completes fuzzy queries over character data effectively by using the bloom filter as the encrypted index. We analyze the false positive probability of the bloom filter to obtain the optimal parameters. The experimental results show that the performance of the queries in our scheme is improved compared with the pairs coding method and the traditional method.

Keywords—query; database; encrypted data

I. INTRODUCTION

The database is the most important part of an information system. Enterprise databases host much important data, while they are threatened by inside and outside attacks. Database encryption, as an active protection mechanism, is an effective way to protect data integrity and privacy. A compromise solution between performance and security can be achieved by encrypting only the sensitive fields [1]. When data is encrypted, however, the query performance degrades greatly, since a character string match is usually not as quick as the match of numeric data [2]; how to query encrypted data efficiently therefore becomes important and challenging. Many researches adopt a two-phase query method, in which the query is completed by the client side and the server side together, and constructing an index for the encrypted data is the key to improving performance.

In this paper, a scheme to support queries over encrypted data is proposed. We classify the data into sensitive and not-sensitive, and we construct different indexes to support different computations for different data types in the two-phase query. We perform range queries over numeric data by constructing a bucket index. For character data, we express a character string as a triple and build a bloom filter on it according to the real situation, so that full-table decryption is avoided; we also analyze the relations of the parameters that affect the false positive probability.

II. RELATED WORK

Hacigumus et al. introduced the DAS model [3], in which the database is provided as a service to the client. Then in [4] they proposed the two-phase query, the first phase of which returns a coarse result by using an index, and the second of which decrypts the encrypted result and does a refined query; for this they proposed the bucket index. They also combined the privacy homomorphism technique to support arithmetic computation [5]. Bijit Hore further optimized the bucket index method on how to partition the buckets to obtain a trade-off between security and query performance [6].

There are also some researches on the fuzzy query of character strings. Zhengfei Wang proposed a pairs coding function to support fuzzy queries over encrypted character data [7][8]. The pairs coding method encodes every two adjacent characters in sequence and converts the original string directly to another characteristic string by a hash function, which is saved in the database as numeric data; this method cannot deal with a single character. Paper [9] proposed a characteristics matrix to express a string, and the matrix is also compressed into a binary string as the index. Every character string needs a matrix of size 256x256, which is large and leads to much computation; the length of the index comes to more than a hundred bits, which is not suitable for storage in a database, and the method could perform badly for big character strings, since it focused on the query performance at the cost of storage space. All of these index-based methods are supported by the DBMS.

III. THE ARCHITECTURE OF ENCRYPTED DATABASE

A. Encrypt storage model

We use the relation model R(A1, ..., An) to represent a database relation, and we classify the columns into sensitive and not-sensitive. Our encryption scheme is based on the column level: column-level encryption minimizes the number of bytes encrypted, with a flexible choice of encrypted fields and good efficiency, especially when only a few fields are sensitive.

The corresponding encryption relation can be expressed as

R^E(A1^E, ..., Am^E, A1^S, ..., Ar^S, A1^C, ..., Ac^C)    (1)

where a column with superscript E stands for an encrypted column, a column with superscript S stands for a bucket index column, and a column with superscript C stands for an index column for a character string. They are expressed as follows:

Ai^E = Encrypt(Ai), Ai^S = Sindex(Ai), Ai^C = Cindex(Ai)    (2)

where Encrypt() is the encryption function, Sindex() is the bucketIndex() function, and Cindex() is the function generating the character index. In this way we get an encrypted column and an index column for every sensitive column; the encrypted column is used for equality queries.

The construction of the index should follow two principles: first, the index should filter false records efficiently; second, it should be safe enough not to leak the true value. It is not possible for one index to support all computations, and there are many data types in a DBMS with different types of queries: numeric data needs equality and range queries, character data needs equality and fuzzy queries, while date data needs queries with "between...and". So we build different types of indexes according to the data type: a bucket index to support range queries, mainly for numeric data and date data, and an index using a bloom filter for character data to support fuzzy queries. In fact, the essence of the two-phase query is the generation and use of the index, which also decides the query efficiency.

B. The process of encrypted data

Our scheme extends the two-phase query framework [4], as shown in Fig. 1. An Encryption/Decryption (E/D) Engine layer is added between the application and the DBMS. The application interface is the only thing we provide to applications, like setting a client proxy; on the E/D Engine side, an application listener listens for requests coming from every application interface, like setting a server proxy. The communication between the application and the E/D Engine depends on the interaction of the two proxies.

Every time an application puts forward an SQL statement, the application interface on its own side accepts it and transfers it to the E/D Engine, where the Store Module or the Querying Translator deals with it according to the SQL type. The Store Module turns an insert SQL into a corresponding one with the data encrypted. The Querying Translator, using the metadata in the Security Dictionary, splits an original query over unencrypted relations into: (1) a corresponding query over encrypted relations to run on the DBMS, and (2) a client query for post-processing the results of the server query, which is sent to the QRF (Querying Result Filter) as a querying condition; see the details in [4]. The QRF takes the encrypted coarse result as input, calls the decryption function when encrypted data is involved, and then checks every record against the query condition; the output of this process is the exact query result set, which the QRF sends to the application through the secured communication. Decryption is thus applied to the coarse result only, avoiding full-table decryption.

Figure 1. Architecture of storage and query of encrypted data.

IV. CONSTRUCT BUCKET INDEX FOR NUMERIC DATA

Different from the bucket index in DAS [4], we construct an order-preserving bucket index for numeric data by a series of computations, without storing and accessing the metadata about partition information, which is another virtue of column-level encryption. The computations are as follows:

Step 1: We divide every data value v by a Seed to get the quotient and the residue r, and we get the digit number of the numeric data. We then have the initial bucket number IB:

IB = digitNum(v) || (v - r) / Seed    (3)

where || denotes concatenation. It is easily proved that a larger v has a larger IB; this characteristic supports range queries for numeric data.

Step 2: In order to enhance the security of the bucket number, we take a monotone increasing function F(x) to transform IB, so that IB is blurred; in this way we have the final bucket index F(IB). A larger v has a larger IB and consequently a larger F(IB), so order is preserved, while the true value is obscured and safe on the condition that F(x) is secure. The Seed and F(x) are the only things we need to protect.
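The paper gives no code for these steps; the following minimal Python sketch illustrates Steps 1 and 2 under stated assumptions (the Seed value and the monotone function F below are illustrative choices, not the paper's):

```python
SEED = 37  # secret divisor; an assumption for illustration

def digit_num(v: int) -> int:
    """Number of decimal digits of v (Step 1)."""
    return len(str(abs(v)))

def F(ib: int) -> int:
    # Any secret, strictly increasing function works (Step 2);
    # here an affine map is assumed for illustration.
    return 13 * ib + 7

def bucket_index(v: int) -> int:
    quotient = (v - v % SEED) // SEED             # (v - r) / Seed
    ib = int(str(digit_num(v)) + str(quotient))   # digitNum(v) || quotient
    return F(ib)

# Order preservation is what lets range predicates be rewritten
# on the index column instead of on the plaintext:
assert bucket_index(120) <= bucket_index(4500)
```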

V. CONSTRUCT INDEX FOR CHARACTER STRING

Large quantities of string data are used in many applications, and fuzzy query is used frequently. How to execute a query efficiently over encrypted string data is the emphasis of this paper.

A. Triple for character string

In this section, we construct a triple to present a character string, which can express both the characters in the string and the relations of the characters. It is defined as follows:

Definition 1 (Triple for a character string): For a string s = 'c1c2...cn', let w = {w1, ..., wn} be the set of words split by blank, comma, and full stop characters; let u = {a1, a2, ..., an} be the set of characters in s; and let r = {r1, r2, ..., rn} be the set describing the relationships of adjoining characters. Then we have t = <w, u, r>.

B. Bloom filter on the triple

A bloom filter is a simple randomized data structure that answers membership queries with no false negatives and a small false positive probability [11]. It is space-efficient. A bloom filter uses k independent random hash functions h1, ..., hk with range {0, ..., m-1}, using an m-bit array to represent a set S = {s1, s2, ..., sn}, where n is the number of elements. Initially, all bits in the bit array are set to 0. For each element s in S, the bits hi(s) are set to 1 for 1 <= i <= k. To check whether an element x is in S, we check whether all hi(x) are set to 1. If not, then clearly x is not a member of S; if all hi(x) are set to 1, we assume that x is in S, with some false positive probability [12]. This structure is shown in Fig. 2.

Figure 2. The bloom filter built on the triple.

C. Algorithm of index based on bloom filter

We build the bloom filter index on the triple according to the real situation: the character string is mapped to an m-bit array comprised of '0' and '1'. Every element in w, u, and r is encoded into multiple positions of the binary string with the tag '1', and then all the hash positions are converted to a numeric value, which is stored in the database as the index in the form of a numeric. The bloom filter algorithm we use is described in Fig. 3, where we use MD5 to obtain the k hash functions.

Figure 3. The bloom filter algorithm.

In this way, a fuzzy query over character data becomes equivalent to a membership query over the bloom filter, performed by a simple & operation on numeric data. Assume the condition of a query is "WHERE Ai LIKE '%matchStr%'". We first compute the index value of 'matchStr'; the condition is then changed to "WHERE AiS & value = value". In this way we convert the LIKE query over the encrypted relation to a query over the index attribute.
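As an illustration of the construction above, here is a minimal Python sketch of the triple and the bloom filter index. The MD5-based derivation of the k hash positions follows the description of Fig. 3, while the exact word/pair splitting details and the parameter values are assumptions:

```python
import hashlib

M = 31  # bits in the filter (the experiments use m = 31)
K = 4   # number of hash functions (k = 4 in the experiments)

def triple(s: str):
    """t = <w, u, r>: words, characters, and adjoining-character pairs."""
    words = set(s.replace(',', ' ').replace('.', ' ').split())
    chars = set(s)
    pairs = {s[i:i + 2] for i in range(len(s) - 1)}
    return words | chars | pairs

def positions(element: str, k: int = K, m: int = M):
    """Derive k hash positions from one MD5 digest, as in Fig. 3."""
    digest = hashlib.md5(element.encode()).digest()
    return [int.from_bytes(digest[2 * i:2 * i + 2], 'big') % m for i in range(k)]

def bf_index(s: str) -> int:
    """Encode the whole triple into a single m-bit integer index."""
    bits = 0
    for element in triple(s):
        for p in positions(element):
            bits |= 1 << p
    return bits

# A LIKE '%matchStr%' predicate becomes a bitwise containment test,
# i.e. the SQL condition "WHERE A_S & value = value":
def may_contain(index: int, match_str: str) -> bool:
    value = bf_index(match_str)
    return index & value == value
```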
D. Analysis of the false positive probability for bloom filter

The key of our method is to minimize the number of false records returned, which depends on the false positive probability of the bloom filter; the false positive probability in turn depends on multiple factors, so we need to analyze their relations. The probability of a false positive for an element is

f = (1 - p)^k = (1 - (1 - 1/m)^(kn))^k ~ (1 - e^(-kn/m))^k    (4)

where p is the probability that a given bit is still 0 after all n elements are inserted. A large m and a small n both lead to a small f, so if n is large, a larger m will be needed, or else the false positive probability will be high. For given m and n, f is smallest when k = (ln 2)(m/n) [12]. In practice, k must be an integer, and a smaller k might be preferred since it reduces the amount of computation necessary.

When k = 1, the bloom filter becomes the pairs coding function, so we can say pairs coding is a special case of the bloom filter. By the conclusion above, the pairs coding function has

f = 1 - p = 1 - (1 - 1/m)^n ~ 1 - e^(-n/m), k = 1    (5)

so the closer (ln 2)(m/n) is to 1, the better the false positive probability of pairs coding is.

When a query condition is "LIKE '%matchStr%'", the probability that a not-matched record is returned is f^NUMBER(matchStr), where NUMBER(matchStr) stands for the number of triple elements that matchStr has. So the larger matchStr is, the smaller the probability of a false record being returned.
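For concreteness, the relations above can be checked numerically; a small sketch (the m and n values here are illustrative):

```python
from math import exp, log

def false_positive(m: int, n: int, k: int) -> float:
    # f = (1 - e^{-kn/m})^k, Eq. (4)
    return (1 - exp(-k * n / m)) ** k

m, n = 31, 6                 # e.g. a short string yielding n triple elements
k_opt = log(2) * m / n       # optimum k = (ln 2)(m/n), about 3.6 here
print(round(k_opt, 2), false_positive(m, n, 1), false_positive(m, n, 4))
```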

VI. THE EXPERIMENT AND THE ANALYSIS

We leave the efficiency of the bucket index out of account in this paper and focus on the fuzzy query over encrypted character strings. The purpose of our experiments is to test the performance of fuzzy queries among our method, the pairs coding method, and the traditional method. The environment is Windows XP on a P4 2.66 GHz CPU with 1 GB RAM; the database is Oracle 10g; the programming language is Java; the encryption algorithm is DES, and the length of the key is 128. According to the TPC-H standard, we use the Benchmark Factory for Database tool to build our database, taking scale = 1, and we use the tables ORDERS and PART as the experimental data sources. For ORDERS the sensitive column is O_COMMENT, and for PART the sensitive column is P_CONTAINER.

When a query is "o_comment like '%unusual%'", with k = 4 and m = 31 we get the index value 480497654. The converted query condition is then

bitand(e_comment, 480497654) = 480497654    (6)

We have the following definition of the false query probability:

Definition 2: Assume the number of tuples returned in the first phase of the query is n1, and the number of tuples satisfying the original query is n2, with n1 >= n2. Then the false query probability is

fq = (n1 - n2) / n1    (7)

Note that fq is different from f: f is the probability that an element not in the set is taken as a member, while fq is the proportion of wrong records returned; fq can nevertheless reflect f. We obtain fq by executing the fuzzy queries over the encrypted relation and recording n1 and n2.

Fig. 4 shows the queries over ORDERS, and Fig. 5 shows the queries over PART. We compare the pairs coding method and the bloom filter when k = 1, and we also compare our method with the pairs coding method extended to different k and with the traditional full-table decryption method. The results show that the improvement of the bloom filter based index is obvious.

Figure 4. Comparison of bloom filter, pairs coding and traditional methods (ORDERS).
Figure 5. Comparison of different index methods (PART).

VII. CONCLUSION

We proposed a query scheme to query encrypted data efficiently by using different types of indexes for different data. Firstly, we use a triple to denote a string, using a bloom filter to check the existence of the elements of the triple, and then convert the query over the sensitive attribute into a query over the index attribute in the two-phase query. Secondly, we analyzed the parameters of the bloom filter to get the minimal false positive probability. Thirdly, we construct a bucket index without saving the metadata of every partition, which enhances the security of the sensitive data. Compared with the traditional method and the pairs coding method, the experimental results show that the performance is improved.

ACKNOWLEDGMENT

This work is partly supported by the National High-Tech Research and Development Plan of China under Grant No. 2005AA113040 and the Co-Funding Project of Beijing Municipal Education Commission under Grant No. JD100060630.

REFERENCES
[1] S. Sesay, Z. Yang, J. Chen, and D. Xu, "A secure database encryption scheme," Consumer Communications and Networking Conference (CCNC), 2005, pp. 49-53.
[2] H. Zhu, J. Cheng, and R. Jin, "Executing query over encrypted character strings in databases," Frontier of Computer Science and Technology (FCST), 2007, pp. 90-97.
[3] H. Hacigumus, B. Iyer, and S. Mehrotra, "Providing database as a service," in Proceedings of the International Conference on Data Engineering (ICDE), 2002, pp. 29-38.
[4] H. Hacigumus, B. Iyer, C. Li, and S. Mehrotra, "Executing SQL over encrypted data in the database service provider model," in ACM SIGMOD Conference, 2002, pp. 216-227.
[5] H. Hacigumus, B. Iyer, and S. Mehrotra, "Efficient execution of aggregation queries over encrypted relational databases," in Proceedings of Database Systems for Advanced Applications (DASFAA), 2004, pp. 125-136.
[6] B. Hore, S. Mehrotra, and G. Tsudik, "A privacy-preserving index for range queries," in Proceedings of the 30th VLDB Conference, 2004, pp. 720-731.
[7] Z. Wang, J. Dai, W. Wang, and B. Shi, "Fast query over encrypted character data in database," Communications in Information and Systems, 2004, pp. 289-300.
[8] Z. Wang, W. Wang, and B. Shi, "Storage and query over encrypted character and numerical data in database," in Proceedings of the Fifth International Conference on Computer and Information Technology (CIT), 2005, pp. 591-595.
[9] Y. Zhang, W. Li, and X. Niu, "A method of bucket index over encrypted character data in database," Intelligent Information Hiding and Multimedia Signal Processing (IIH-MSP), 2007, pp. 186-189.
[10] Y. Ohtaki, "Partial disclosure of searchable encrypted data with support for Boolean queries," Availability, Reliability and Security (ARES 2008), 2008, pp. 1083-1090.
[11] J. Bruck, J. Gao, and A. Jiang, "Weighted bloom filter," IEEE International Symposium on Information Theory, 2006, pp. 2304-2308.
[12] M. Mitzenmacher, "Compressed bloom filters," IEEE/ACM Transactions on Networking, vol. 10, no. 5, 2002, pp. 604-612.

Traversing Model Design Based on Strong-association Rule for Web Application Vulnerability Detection

QI Zhenyu, XU Jing, GONG Dawei, TIAN He
Institute of Machine Intelligence, Nankai University, Tianjin, China, 300071
qi_zhenyu@hotmail.com, xujing@nankai.edu.cn, flyingdonkeycn@msn.com, fyonatian@hotmail.com

Abstract—With web applications playing an ever more important role in the information society, software dependability is in higher demand, and web application vulnerability has become one of the biggest threats to software security. Detecting and solving vulnerabilities is an effective way to enhance software dependability. Most active detection methods traverse all web links and interactive units in the traversing step, which easily causes low efficiency and lack of pertinence. This paper focuses on the characteristics of web applications, especially web pages, and presents a traversing model based on strong-association rules. Firstly, the model applies the HITS algorithm to generate a series of pages which may be targeted by hackers for attacking. Then the model adopts an association rule algorithm to obtain high-related rules by analyzing transaction data stored in a database, on the basis of which we deduce high-related rules between the properties of interactive units and the ways of attacking, which makes detection more efficient.

Keywords—software dependability; Web vulnerability; HITS algorithm; Apriori algorithm

I. INTRODUCTION

Web applications have been applied in many kinds of fields. The more popular a web application becomes, the more its vulnerabilities can be exploited for attacking; but the development of vulnerability detection is obviously slower. Therefore, active detection methods that simulate hackers have become the development trend. Most active methods traverse all web links and interactive units in the traversing step; they provide a basis for effective detection but bring unavoidable cursoriness. Can we find attack points in web pages as quickly as a hacker does, out of so much information? The answer is yes. The web pages which are browsed and interacted with most frequently are the pages which interest hackers most; from the hacker's view, the pages given high weight by the HITS algorithm are the places where vulnerabilities are most likely to be exploited. The traversing problem in vulnerability detection is presented on the basis of this consideration.

A web application consists not only of web pages, but also of hyperlinks, active links, and active interactive units, which contain many logical relations. When one page links to another page, there exists some logical relation between them, because a logical relation means that there could be a data stream between the pages. These relations are exactly what hackers look for, so link information can be used as an important resource to find content correlation. Therefore, this paper presents a traversing model for web application vulnerability detection, which provides strong support for simulating the attack model and gives a method to traverse web pages aiming at detecting web application vulnerabilities. The HITS algorithm is described in Section 2; in Section 3 we discuss association rules and adapt an improved Apriori algorithm to obtain an optimized frequent set, on the basis of which we deduce high-related rules between the properties of interactive units and the ways of attacking.

II. OVERVIEW OF HITS ALGORITHM

HITS is the classical algorithm based on hyperlinks. It mines the link structure of the Web and discovers the thematically related Web communities [1, 2] that consist of 'authorities' and 'hubs'. Authorities are the central Web pages in the context of particular query topics. For a wide range of topics, the strongest authorities consciously do not link to one another; they can only be connected by an intermediate layer of relatively anonymous hub pages, which link in a correlated way to a thematically related set of authorities. Links from different pages to one page indicate the importance of that page, and HITS distinguishes the importance of links according to weight.

Firstly, HITS obtains a start set of pages through an initial URL and then generates a base set on the basis of it; the base set includes the pages pointed to by the start set and the pages pointing to the start set. The rest of the algorithm mainly works on the base set. Authorities and hubs exhibit what could be called a mutually reinforcing relationship: a good hub points to many good authorities, and a good authority is pointed to by many good hubs. HITS gives every page two weights, an authority weight Xp and a hub weight Yp, defined as non-negative values for every page p in the base set. These two types of Web pages are extracted by an iteration that consists of the following two operations:

x_p = SUM_{q -> p} y_q ,  y_p = SUM_{p -> q} x_q    (1)

where the notation q -> p indicates that q links to p. For a page p, the weight x_p is updated to be the sum of y_q over all pages q that link to p; in a strictly dual fashion, the weight y_p is updated to be the sum of x_q over all pages q that p links to. If the value of x_p is big, the page is considered a good authority; if the value of y_p is big, the page is considered a good hub.

We can mark the pages with the integers {1, 2, ..., n} and define their n x n adjacency matrix A, in which the value at position (i, j) is 1 if page Pi points to page Pj, and 0 otherwise. Defining the authority weight vector x = (x1, x2, ..., xn) and the hub weight vector y = (y1, y2, ..., yn), the matrix form of formula (1) is

x <- A^T y ,  y <- A x    (2)

Stretching out formula (2), we get

x <- A^T y <- A^T A x = (A^T A) x ,  y <- A x <- A A^T y = (A A^T) y    (3)

Therefore x and y can be deduced by formula (3). With standardization at each step, the iterative sequences converge to eigenvectors; according to linear algebra theory, the result has nothing to do with the initial vector and the selected parameters [6], so hub weight and authority weight are intrinsic properties of a page. Kleinberg [3] demonstrates that this algorithm converges; other papers [4], [5] also describe it. Finally, the algorithm outputs a series of pages with the highest weights.

Although the result of HITS is good, there are still some problems: the algorithm ignores the content of web pages and hence the relation between page content and the way of attacking. This relation is extracted with association rules, as discussed in the next section.
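A minimal power-iteration sketch of Eqs. (1)-(3) in Python; the toy link graph below is an illustrative assumption:

```python
import numpy as np

# adjacency matrix A: A[i, j] = 1 iff page i links to page j (toy data)
A = np.array([[0, 1, 1, 0],
              [0, 0, 1, 0],
              [1, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)

n = A.shape[0]
x = np.ones(n)  # authority weights
y = np.ones(n)  # hub weights

for _ in range(50):            # iterate x <- A^T y, y <- A x
    x = A.T @ y
    y = A @ x
    x /= np.linalg.norm(x)     # standardization keeps the iteration bounded
    y /= np.linalg.norm(y)

print("authorities:", np.round(x, 3))  # principal eigenvector of A^T A
print("hubs:       ", np.round(y, 3))  # principal eigenvector of A A^T
```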

III. ASSOCIATION RULE EXTRACTION IN WEB APPLICATION

On the client side, a web page is actually an HTML document, which can be obtained and read by a user easily. Since the main source from which hackers get information is the information in web pages, it is important to find the relationship between web application pages and hacker attacks. An HTML document is general text plus special tags, and these tags are the keywords for analyzing an HTML document. In the meantime, some keywords in the webpage text should also be paid attention to, such as username, password, search, upload, and download, because there is an association between such text and the interactive forms corresponding to these keywords. This type of analysis collects the keywords displayed together and then finds their relations.

When we collect all the HTML documents of one web application displayed on the client side, they constitute a document database. After pre-processing, in which the association analysis reduces words to their stems and removes useless words, we can find a series of keywords. Every document can be viewed as a transaction, and the keywords in a document can be viewed as items, so the database can be displayed as follows: {document_id, set_of_keywords}. In this way we can find association rules in web pages, and association rules [7-9] can be used for extracting high-related rules.

A. Association Rule

Association rule mining can be stated as follows. Let I = (i1, i2, ..., im) be a set of items, and let D = (T1, T2, ..., Tn) be a set of transactions in a database, where each transaction Tj (j = 1, 2, ..., n) is such that Tj is a subset of I. Each transaction is assigned an identifier, called a TID. Let A be a set of items; a transaction T is said to contain A if and only if A is a subset of T. An association rule is an implication of the form A -> B, where A and B are subsets of I and A and B are disjoint. The rule A -> B holds in the transaction set D with support s, where s is the percentage of transactions in D that contain both A and B; this is taken to be the probability P(A u B). The rule has confidence c in D if c is the percentage of transactions in D containing A that also contain B; this is taken to be the conditional probability P(B|A). That is,

support(A -> B) = P(A u B) = s
confidence(A -> B) = P(B|A) = support(A u B) / support(A) = c

If support(A -> B) >= minsupport and confidence(A -> B) >= minconfidence, then A -> B is a strong rule; if support(X) >= minsupport, X is a frequent itemset. Popular association rule mining is to mine the strong association rules that satisfy both the user-specified minimum support threshold and minimum confidence threshold. The problem of mining association rules can be decomposed into two major steps: 1) find all frequent itemsets; 2) use the frequent itemsets to generate the strong rules. Once all frequent itemsets from the transactions in a database D have been found, it is straightforward to generate the strong association rules from them.

B. Apriori algorithm

The Apriori algorithm is an influential algorithm for mining frequent itemsets for Boolean association rules. It employs an iterative approach known as a level-wise search, where k-itemsets are used to explore (k+1)-itemsets. First the set of frequent 1-itemsets is found; this set is denoted L1. Then L1 is used to find L2, the set of frequent 2-itemsets, which is used to find L3, and so on, until no more frequent k-itemsets can be found. The frequent k-itemsets are always marked as Lk, and the finding of each Lk requires one full scan of the database.

Based on the definition of the frequent itemset, the following properties hold:
(1) If A is a subset of B, then support(A) >= support(B).
(2) If A is a subset of B and A is a non-frequent itemset, then B is a non-frequent itemset.
(3) If A is a subset of B and B is a frequent itemset, then A is a frequent itemset.
In other words, if an itemset I is not frequent, namely P(I) < minsupport, and we add an item A into I, the resulting itemset I u A cannot be more frequent than I, so I u A is not frequent. If all the subsets of an itemset are frequent, the itemset becomes a new candidate.

Theorem 1: Any itemset in Lk must be the superset of a certain itemset in Lk-1.
Theorem 2: If a transaction does not contain any itemset in Lk-1, then deleting this transaction will not affect the calculation of Lj (j > k).
The conclusion of Theorem 1 and Theorem 2 is that if a transaction contains no itemset in Lk-1, we can mark this transaction as deleted and consider it no more in the later scans.

C. Improved Apriori algorithm APL

Based on the Apriori algorithm, an improved algorithm for association rule mining named APL is proposed. Based on a strategy of transaction-database fuzzy pruning, APL reduces the size of the database and improves the efficiency of the algorithm. Unlike Apriori, which fixes the new candidates only after each complete database scan, APL dynamically values the support of all the counted itemsets and can add new candidates at any starting point.
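A compact, runnable sketch of the level-wise search described above; the toy keyword transactions are illustrative assumptions:

```python
from itertools import combinations

def apriori(transactions, minsupport):
    """Return {frequent itemset: support} across all levels L1, L2, ..."""
    n = len(transactions)
    level = {frozenset([i]) for t in transactions for i in t}
    frequent = {}
    while level:
        # one full database scan per level Lk
        counts = {c: sum(1 for t in transactions if c <= t) for c in level}
        lk = {c: cnt / n for c, cnt in counts.items() if cnt / n >= minsupport}
        frequent.update(lk)
        # candidate generation: join Lk with itself, prune by property (2)
        cands = {a | b for a in lk for b in lk if len(a | b) == len(a) + 1}
        level = {c for c in cands
                 if all(frozenset(s) in lk for s in combinations(c, len(c) - 1))}
    return frequent

docs = [{"form", "password", "username"},
        {"form", "password", "submit"},
        {"form", "username"},
        {"password", "username"}]
print(apriori(docs, minsupport=0.5))
```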

Based on the thinking above, we extend Theorem 1 and Theorem 2 and put forward further theorems:

Theorem 3: Any itemset in Lk must be the superset of k certain itemsets in Lk-1.
Theorem 4: If a transaction contains fewer than k itemsets in Lk-1, then deleting this transaction will not affect the calculation of Lj (j > k). Similarly, if a transaction contains fewer than k+1 itemsets in Lk, the transaction can be deleted from the database.

Theorems 3 and 4 guarantee the following conclusion: during every calculation of the support of Lk, compared with Theorems 1 and 2, evidently more invalid transactions can be deleted, which prunes the database on a larger scale, further reduces the records which will be scanned next time, and improves the efficiency of the algorithm. The improved algorithm, which uses the intermediate results of the last database scan to prune the database, is as follows:

Input: transaction database D, minimum support threshold minsupport.
Output: all the frequent itemsets in the database D.

    L1 = {large 1-itemsets}
    for (k = 2; Lk-1 != null; k++) {
        Ck = Apriori-gen(Lk-1)                  // new candidate itemsets
        forall transactions t in D {
            if (t.delete == 0) {
                Lt = subset(Ck, t)              // the candidate itemsets contained in t
                forall candidates l in Lt: l.count++
                if (|Lt| < k+1) { t.delete = 1 }  // make a delete mark
            }
        }
        Lk = {l in Ck | l.count >= minsupport}
    }
    return L = union of all Lk                  // return all the frequent itemsets

Regarding the "0" here, that is to say the delete-mark value of a transaction that is still considered in the scans: can it be changed further? If the 0 could be replaced by a bigger number, the database would obviously be pruned faster. We combine APL with feedback from the database to enhance the performance.
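The pruning idea of Theorems 3 and 4 can be sketched in Python as follows. This is a simplified illustration, not the authors' implementation: the candidate generation omits the full Apriori-gen prune step, and minsupport is taken as a fraction:

```python
def apl(transactions, minsupport):
    """Frequent itemsets with transaction pruning (Theorems 3/4)."""
    n = len(transactions)
    alive = [True] * n                  # alive[i] means t.delete == 0
    lk = {frozenset([i]) for t in transactions for i in t}
    lk = {c for c in lk
          if sum(1 for t in transactions if c <= t) >= minsupport * n}
    frequent, k = set(lk), 1
    while lk:
        k += 1
        ck = {a | b for a in lk for b in lk if len(a | b) == k}  # join step
        counts = {c: 0 for c in ck}
        for idx, t in enumerate(transactions):
            if not alive[idx]:
                continue                 # pruned in an earlier scan
            lt = [c for c in ck if c <= t]   # candidates contained in t
            for c in lt:
                counts[c] += 1
            if len(lt) < k + 1:          # Theorem 4: mark a delete
                alive[idx] = False
        lk = {c for c, cnt in counts.items() if cnt >= minsupport * n}
        frequent |= lk
    return frequent
```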
IV. DATA IN WEBPAGE

In an HTML document there are links and forms with input fields, hidden fields, and selection fields. Among all kinds of data, the most important ones are dynamic forms and links. We locate these data mainly by HTML tags: we find <href> to locate the links and find <form> to locate the several kinds of fields in forms. The content embedded in the tag <href> should be extracted to get web pages, for example <a href=http://news.sina.com.cn/20080110/n33543.shtml class=red>alcate bell</a>.

A dynamic form generates dynamic web pages by sending a request to the server; if there are dynamic forms in a response, the corresponding request must include these forms. All kinds of CGI programs are responsible for responding to dynamic forms. Hackers cannot browse the source code directly; they interact through the browser as general users, so they get information just by analyzing the HTML document. Hence the input and output of CGI programs are the objects we are most concerned with. The content of a dynamic form is actually the input boxes embedded in the tag <form>. A typical form is listed as follows:

    <FORM name="myform" action=http://myWEB/a.cgi method="post">
    <input type=hidden name=site value='com' length=10>
    <input type=text name=ABC value="abc" length=10>
    <input type=hidden name=chatlogin value='in'>
    <input type=hidden name=product value='mail'>
    <input type=submit name=submission value="submit">
    </FORM>

A dynamic form can be located by finding the keyword <form>, and we then get the properties of the form through keywords like method, post, get, text, hidden, name, value, length, and filename.

V. TRAVERSING MODEL BASED ON STRONG ASSOCIATION RULE

From the hackers' view, their final purpose is to find the weaknesses in a web application. They continuously look for useful information and then attack on purpose using this information. It is easy to see that hackers attack web application vulnerabilities through interaction, so it is fundamental for detection to collect more of the information that hackers are interested in. It is also meaningful to bring the experience of test experts, which includes programmers' traditions and the conventions of the IT field, into web application traversal; such experience can be used by predictive techniques based on experiment.

As shown in Figure 1, this paper designs a traversing model based on strong association. The HITS algorithm outputs a series of high-weight pages from an initial URL; the corresponding URL requests fetch the pages in the form of documents, which provides the data for analysis, and each document is stored in a database. The traversing depth and breadth can be set in advance. What needs attention is that a URL should indeed be part of the web application, and we must avoid traversing the same page repeatedly, which could lead to a dead cycle. This document set provides the dataset for the Apriori algorithm; by the association rule algorithm we get frequent itemsets, and through several cycles of optimizing we can deduce strong relations between interactive properties and attack types. This strong association will be used to direct vulnerability design in simulating attacks, which provides support for traversing on purpose.

VI. CONCLUSION AND FURTHER WORK

Traversing Model 31 . How to search easily. Giles C L. Lawrence S. of 9th ACMSIAM Symposium on Discrete Algorithms. CA: [s. This set provides dataset for Apriori algorithm. REFERENCES [1] Kleinberg J. 1999. 2001. 1994. Proc. The model gets a set of web pages by HITS algorithm. of 9th ACMSIAM Symposium on Discrete Algorithms. 487-499 Figure 1. Authoritative Sources in a Hyperlinked Environment. CA: IBM Almaden Research Center. Henzinger M R. 1994. Authoritative Sources in a Hyperlinked Environment. 2004:150-160 [3] Kleinberg J. 32(8) [5] Bharat K.application vulnerability detection.]. Admittedly. Science. Also Appeared as IBM Research Report RJ 10076. Mining the Link Structure of the World Wide Web. Fast algorithms for mining association rules in large database[R]. The Structure of the Web. it is necessary to design a reasonable way to store data which was got from page documents. 1997-05 [4] Chakrabarti S. which improve the efficiency in the traversing process. Lawrence S. Efficient Densification of Web communities. Strong association between interactive properties and attack types could be got and then support the simulating attack model. Improved Algorithms for Topic Distillation in a Hyperlinked Environment. Dom B E. 1998 [6] Kleinberg J. In Proc. Technical Report FJ9893. In Proceedings of the ACM-SIGIR. 294: 1849-1850 [2] Flake G W. Fast algorithms for mining association rules [A]. how to take less space and how to avoid redundancy is our future work. Proc. Also Appeared as IBM Research Report RJ 10076. of the Sixth International Conference on Knowledge Discovery and Data Mining (ACM SIGKDD-2000). Srikant R.n. [9] Agrawal R. Gibson D. et al. Srikant R. AKNOWLEDGMENT The research work here is sponsored by Tianjin Science and Technology Committee under contract 08ZCKFGX01100 and 06YFJMJC0003. In: Proc of 20th Int Conf Very Large Databases (VLDB’94) [C]. 1997-05 [7] R Agrawal ,T Imielinski,A Swami.Mining Association Rules between Sets of Items in large Database[C]. Proceedings of the ACM In: SlGMOD Conference on Management of Data.1993:2O7—216 [8] Agrawal R. San Jose. IEEE Computer.

Attribute-based Relative Ranking of Robots for Task Assignment

B.B. Choudhury, Department of Mechanical Engineering, NIT Rourkela, Orissa, India, e-mail: bbcnit@gmail.com
B.B. Biswal, Department of Mechanical Engineering, NIT Rourkela, Orissa, India, e-mail: bibhuti.biswal@gmail.com
R.N. Mahapatra, Department of Mechanical Engineering, SIT Bhubaneswar, Orissa, India, e-mail: rabindra@silicon.ac.in

Abstract—The availability of a large number of robot configurations has made robot workcell designers think over the issue of selecting the most suitable one for a given set of operations. The process of selecting the appropriate robot must consider the various attributes of the robot manipulator in conjunction with the requirements of the various operations for accomplishing the task. The present work is an attempt to develop a systematic procedure for the selection of robots based on an integrated model encompassing the manipulator attributes and the manipulator requirements. The work is also aimed at creating an exhaustive list of attributes and classifying them into distinct categories. The developed procedure can advantageously be used to standardize the robot selection process with a view to performing a set of intended tasks. The coding scheme for the attributes and the relative ranking of the manipulators are illustrated with an example.

Keywords—Multirobot; Attributes; Relative ranking

I. INTRODUCTION

Recent developments in information technology and engineering sciences have been the main reason for the increased utilization of robots in a variety of advanced manufacturing facilities. Today's market provides a variety of robot manipulators with different configurations and capabilities, and robots with vastly different specifications are available for a wide range of applications; the selection of robots to suit a particular application and production environment from among the large number available in the market has therefore become a difficult task. Selecting the right kind of robot for an application is not easy; in fact, just meeting the customer requirements can be a challenge. The parameters that determine the capability of a robot are heterogeneous in nature, so formulating an integrated model with all of them becomes difficult at times. The capability of a robot manipulator can be assessed through parameters like the number of joints, type of joints, joint placement, link lengths and shapes, workspace, manipulability, and ease and speed of operation. The speed of operation significantly depends on the complexities of the kinematic and dynamic equations and their computations, so in order to select a suitable robot, both the kinematic and the dynamic aspects should be looked into. Fortunately, a number of tools and resources are becoming available to help designers select the most suitable robot for a new application. However, none of these solutions can take care of all the demands and constraints of a user-specific robotic workcell design, and the addition of system integration to the workcell design process may further complicate the picture. Eventually the designers must use the available information and make their own decisions.

Offodile et al. [1] developed a coding and classification system which was used to store robot characteristics in a database, and then selected a robot using economic modeling; however, they assumed that the user knows which robot to buy, and the question was from whom to buy it. Liang and Wang [2] proposed a robot selection algorithm combining the concepts of fuzzy set theory and hierarchical structure analysis; the algorithm was used to aggregate the decision makers' fuzzy assessments of the robot selection attribute weightings and to obtain fuzzy suitability indices. Rao and Padmanabhan [3] proposed a methodology based on digraph and matrix methods for the evaluation of alternative industrial robots. They proposed a robot selection index that evaluates and ranks robots for a given industrial application; the index was obtained from a robot selection attributes function, in turn obtained from the robot selection attributes digraph. The digraph was developed based on the robot selection attributes and their relative importance for the application considered, and the authors suggested a step-by-step procedure for the evaluation of the robot selection index. Zhao and Yashuhiro [4] introduced a genetic algorithm (GA) for an optimal robot selection and workstation assignment problem for a computer-integrated manufacturing (CIM) system. Boubekri et al. [5] developed an expert system for industrial robot selection considering functional, organizational, and economical factors in the selection process. The use of DEA for robot selection has been addressed by Khouja [6]. Huang and Ghandforoush [7] stated a procedure to evaluate and select robots depending on the investment, the budget requirements, and a comparison of the robot suppliers. Case-based reasoning deals with the issues of using past experiences or cases to understand, plan for, or learn from novel situations.

In this paper, we propose a new mathematically based methodology for robot selection to help designers identify feasible robots, and then outline the most appropriate cases for smoothing the robot selection process. The results of this study will help robot workcell designers to develop a more efficient and effective method to select robots for robot applications.

II. MANIPULATOR ATTRIBUTES

With the growth in robot applications, large numbers of robots are available for various manufacturing applications. Proper identification of the manipulator attributes is critically important when comparing the various alternative robots. A robot manipulator can be specified by a number of quantitative attributes such as payload capacity, repeatability, accuracy, and maximum reach. These data, individually or collectively, help the user to select the most suitable robot for the task that he intends to perform. Sometimes it may be possible to arrive at a rational choice without the formal application of a quantitative or semi-quantitative methodology, by mere articulation of which attributes are important in the context of the particular alternatives under consideration; in most cases, however, the user needs to be assisted in identifying the robot attributes logically.

Some attributes, such as built quality and after-sales service, cannot be expressed quantitatively; these may be expressed by a rating on a scale of (say) 1-10. There are also attributes which are informative in nature, which may be denoted by some number whose numerical value has no significance. Further, there are attributes for which quantification is not directly available and needs to be derived by some mathematical model and analysis: reliability, for instance, can be expressed in terms of the Mean Time Between Failures (MTBF) or the Mean Time To Repair (MTTR), and attributes like life expectancy may be estimated through experimentation. Although the number of joints, the coordinate system, and the degrees of freedom (DOF) are usually considered as the basic parameters, the sequence of joints and their respective orientations and arrangements also matter: robots with the same number of joints and joint sequence but with different joint orientations will have different performance characteristics. The manipulator attributes are grouped according to their broad areas as general, physical, performance, structure/architecture, application, sophistication, control, availability, and task parameters, as given in Table I.

TABLE I. MAJOR ATTRIBUTES FOR MANIPULATOR AND TASK

General: 1. Price range; 2. Type of robot; 3. Arm geometry
Physical parameters: 4. Type of actuators; 5. Weight of the robot; 6. Size of the robot; 7. Type of grippers supported; 8. Number of axes; 9. Space requirements of the robot; 10. Maximum reach; 11. Types of end effectors; 12. Payload of the robot
Performance: 13. Workspace; 14. Stroke; 15. Maximum end effector speed (velocity); 16. Accuracy; 17. Repeatability; 18. Resolution
Structure/architecture: 19. Degree of freedom (DOF); 20. Type of joints
Application: 21. Working environment
Sophistication: 22. Maintainability; 23. Safety features
Control/feedback system: 24. Control of robotic joints; 25. Gripper control; 26. Sensors; 27. Programming method; 28. Number of input channels of the controller; 29. Number of output channels of the controller
Availability/reliability: 30. Downtime; 31. Reliability
Task: 32. Space; 33. Time; 34. DOF required; 35. Force

These attributes and their values, rates, and estimates help the user to create a database; to facilitate searching this database, an identification system has been made for all the robots. The coding scheme of Table II allots a code to each of the 35 parameters, and the identification code in Table III specifies the attribute information with the allotted code in the respective cells. A '0' represents that the information relating to the particular cell is not available or not mentioned by the manufacturer. This coding scheme can be used as it is for a visual comparison between two robots to a certain extent, and it allows faster comparison in various formats.

TABLE II. THE CODING SCHEME (parameter number : code)

General: 1:0, 2:0, 3:6
Physical: 4:4, 5:0, 6:0, 7:0, 8:0, 9:0, 10:4, 11:0, 12:5
Performance: 13:0, 14:0, 15:5, 16:0, 17:1, 18:0
Structure: 19:7, 20:0
Application: 21:0
Sophistication: 22:0, 23:0
Control: 24:3, 25:0, 26:0, 27:1, 28:0, 29:0
Availability: 30:0, 31:0
Task: 32:4, 33:4, 34:8, 35:5

TABLE III. IDENTIFICATION CODE (attribute / information / code)

Price range / - / 0; Type of robot / - / 0; Arm geometry / any one / 6; Type of actuators / any one / 4; Weight of the robot / - / 0; Size of the robot / - / 0; Type of grippers supported / - / 0; Number of axes / - / 0; Space requirements of the robot / - / 0; Maximum reach / 1000 mm / 4; Types of end effectors / - / 0; Payload of the robot / 5 kg / 5; Workspace / - / 0; Stroke / - / 0; Velocity / 50 deg/sec / 5; Accuracy / - / 0; Repeatability / ±0.02 mm / 1; Resolution / - / 0; Degree of freedom / 2 / 7; Type of joints / - / 0; Working environment / - / 0; Maintainability / - / 0; Safety features / - / 0; Control of robotic joints / any one / 3; Gripper control / - / 0; Sensors / - / 0; Programming method / any one / 1; Number of input channels / - / 0; Number of output channels / - / 0; Downtime / - / 0; Reliability / - / 0; Space / - / 4; Time / - / 4; DOF required / - / 8; Force / - / 5

The information supplied by the manufacturer to the user is not sufficient and needs to be more elaborate. The main attributes have therefore been broken down into sub-attributes and sub-sub-attributes so that a robot manipulator can be identified in a very precise and detailed manner. The ease of operation, termed manipulability, can be quantified as a manipulability measure and used as an attribute. This architecture gets representation in the formulation of the problem for the present work.

III. THE ROBOT SELECTION PROCESS

Recent advances in robotics technology allow robotic workcell engineers to design and implement more complex applications than ever before; however, some of these considerations also make the robot selection process more complicated. The robot selection criteria should include key parameters such as cost, payload, maximum reach, maximum speed, degrees of freedom, swept area, and repeatability. Based on the robot application, some of the selection criteria can be ignored and others may become critically important.

The robot selection system can be divided into four activities: i) operation requirements and a data library of robots; ii) the coding scheme; iii) the selection of attributes; and iv) the ranking of robots. The first activity mainly consists of listing the requirements of the robots and the desired operations. The robot selection attributes are identified for the given application, and the robots are shortlisted on the basis of the identified attributes satisfying the requirements. The attributes necessitated by the particular application and/or the user are set aside as 'pertinent attributes', and the threshold values of these pertinent attributes may be assigned by obtaining information from the user and a group of experts. On the basis of the threshold values of the pertinent attributes, a shortlist of robots is obtained: the database is scanned for these attributes, one at a time, to eliminate the robot alternatives which have one or more attribute values falling short of the minimum required (threshold) values. A mini-database is thus formed which comprises the satisfying solutions, i.e., the alternatives which have all attributes satisfying the acceptable levels of aspiration. Henceforth the selection procedure focuses solely on the pertinent attributes, leaving out the rest. The problem is now one of finding the optimum or best among these satisfying solutions, so the selection procedure needs to rank these solutions in order of merit.

The first step is to represent all the information available from the database about these satisfying solutions in matrix form. Each row of this matrix is allocated to one candidate robot and each column to one attribute under consideration; such a matrix is called the decision matrix, D. An element dij of the decision matrix gives the value of the jth attribute for the ith robot in its raw (non-normalized) form and units.

Since the attributes are of different dimensions and units, they are normalized. Normalization brings the data within a particular range and provides dimensionless magnitudes, each indicating the standing of a particular attribute magnitude when compared with the whole range of magnitudes for all the candidate robots; it thus gives the true comparable values of the attributes. The normalized specification matrix, N, is formed from the decision matrix D; an element nij of N is calculated as

nij = dij / ( SUM_{i=1..m} dij^2 )^(1/2)

where dij is an element of the decision matrix D. The normalized specification matrix has the magnitudes of all the attributes of the robots on a common scale of 0 to 1.

The next step is to obtain information from the user or the group of experts on the relative importance of one attribute with respect to another, comparing the attributes pairwise. The values of all such comparisons are stored in a matrix called the weight matrix, W, where wij contains the relative importance of the ith attribute over the jth attribute. The value obtained from the weight matrix is applied to the normalized specifications, since the attributes have different importance when selecting a robot for a particular application. The resulting matrix V, which combines the relative weights and the normalized specifications of the candidates, is the weighted normalized matrix; its elements are obtained by multiplying each normalized attribute value by the corresponding attribute weight from the chosen set of weights, and the sum of a robot's weighted normalized values gives its ranking factor.
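The steps of this procedure can be sketched compactly; the small decision matrix and the single weight set below are illustrative assumptions, not the paper's full thirteen-attribute data:

```python
import numpy as np

# decision matrix D: rows = candidate robots, columns = attributes
D = np.array([[1000, 2,  5,  50],
              [2000, 3, 10,  90],
              [5500, 6, 60, 250]], dtype=float)

W = np.array([13, 2, 6, 1], dtype=float)   # one set of attribute weights

N = D / np.sqrt((D ** 2).sum(axis=0))      # n_ij = d_ij / sqrt(sum_i d_ij^2)
V = N * W                                  # weighted normalized matrix
sigma = V.sum(axis=1)                      # total ranking factor per robot

order = np.argsort(-sigma)                 # best robot first
print(sigma.round(3), "ranking:", order + 1)
```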

The procedure for the selection of the robot is as follows.

Step 1: Formation of the decision matrix 'D'. The objective values of the thirteen attributes, namely maximum reach (MR), degrees of freedom (DF), payload (PL), velocity (VL), arm geometry (AG), actuator (AT), control mode (CM), repeatability (RT), robot programming (RP), space (SC), time (TE), task degrees of freedom (DF1) and force (FR), are taken for each candidate robot:

        MR   DF  PL   VL   AG  AT  CM  RT  RP  SC    TE  DF1  FR
D = [ 1000    2   5   50    4   7   4  50   3  2      …    3   5
      2000    3  10   90    9  10   6  10   4  2.5    …    3   5
      5000    4  30  120   20   3   8   2   …  …      …    3   5
      5000    5  40  200   20  10  10   1   …  …      …    3   5
      5500    6  60  250   24   7   8   1   …  …      …    3   5 ]

TABLE IV. MINIMUM REQUIREMENT OF A ROBOT
Sl.  Parameter           Value
1    Load capacity       minimum 5 kg
2    Repeatability       0.02 mm
3    Velocity            at least 50 deg/sec
4    Types of drives     any one
5    Degree of freedom   at least 2
6    Arm geometry        any one
7    Control mode        any one
8    Robot programming   any one

TABLE V. CRITERIA FOR ROBOT SELECTION
Criteria             Robot-1  Robot-2  Robot-3  Robot-4  Robot-5
Maximum reach (MR)      1000     2000     5000     5000     5500
DOF (DF)                   2        3        4        5        6
Payload (PL)               5       10       30       40       60
Velocity (VL)             50       90      120      200      250
Arm geometry (AG)          4        9       20       20       24
Actuator (AT)              7       10        3       10        7
Control mode (CM)          4        6        8       10        8
DOF (DF1)*                 3        3        3        3        3
Force (FR)*                5        5        5        5        5
(The repeatability (RT), robot programming (RP), space (SC)* and time (TE)* values are likewise tabulated for each robot.)
*These values pertain to task-1 of the fifteen tasks actually considered for the problem; only one task has been considered for calculation here due to page constraints.

Step 2: Formation of the weight matrix 'W'. 'W' is a 12 x 13 matrix in which each row assigns one set of weights to the thirteen attributes (for example, the weight on MR runs from 13 down to 2 across the twelve sets, while the weight on DF runs from 2 up to 13). The calculations are thus made with a total of 12 different sets of weights.

Step 3: Calculation of the normalized specification matrix 'N' from 'D', as described above; 'N' is a 5 x 13 matrix of dimensionless values on the common 0-to-1 scale.

Step 4: Calculation of the normalized values (N.V.), the ranking factors sigma = W x N, and the total score SUM(sigma). The calculation of the total ranking factor for one set of weights for Robot-1 is presented in Table VI, which lists for each parameter the objective value, the normalized value (N.V.), the weight W and the ranking factor sigma, together with the total SUM(sigma); the ranking factors of the other robots are calculated likewise under all 12 sets of weights.

Step 5: Calculation of the average of the ranking factors of all the robots.
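Continuing the hypothetical normalize() helper above, the ranking-factor computation of Steps 4 and 5 can be sketched as follows; the helper names are ours, and the paper's full 12 x 13 weight matrix is not reproduced here.

def ranking_factor(weights, n_row):
    # Total ranking factor of one robot under one set of weights:
    # sigma = sum_j w_j * n_ij (Step 4).
    return sum(w * n for w, n in zip(weights, n_row))

def average_ranking(weight_sets, n_row):
    # Average of the ranking factors over all 12 weight sets (Step 5).
    scores = [ranking_factor(ws, n_row) for ws in weight_sets]
    return sum(scores) / len(scores)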

IV. RESULTS AND DISCUSSION

The robots are arranged in order of their ranking factors, based on the significant attributes chosen keeping the application of the robots in view. The values of the total ranking factor SUM(sigma) obtained for each robot with the twelve different sets of weights are given in Table VII; in the formula, the summation indicates the total score accumulated over the pertinent attributes. According to the results obtained and the analysis thereby, Robot-5 and Robot-4 have the highest ranking factors and should be recommended as the best robot alternatives; the 1st and 2nd ranked robots have the highest figures amongst all the robots. On the basis of the ranking factors the robots are rated as 'Low', 'Medium' and 'High' in relation to the group of robots under consideration, as shown in Table VIII. The average values of the ranking factors are presented in Fig. 1, and the ranking curves of the robots are shown in Fig. 2.

TABLE VII. VALUES OF TOTAL RANKING FACTOR: the twelve values of SUM(sigma) for each robot under the different sets of weights.

TABLE VIII. SCORES OF ROBOTS
Sl. No  Robot          Relative Ranking  Relative Rating
1       Robot-1 (R-1)  4                 Low
2       Robot-2 (R-2)  3                 Medium
3       Robot-3 (R-3)  2                 Medium
4       Robot-4 (R-4)  1                 High
5       Robot-5 (R-5)  1                 High

Figure 1. Comparison of robots.
Figure 2. Ranking curves of robots.

Two robot alternatives are thus found to be more efficient than the other candidates; in order to discriminate between these two, the ranking factors themselves should be looked at. However, factors such as economic considerations, availability, management constraints and corporate policies, which were not considered in the coding and evaluation, may also be weighed before a final decision is taken to select a new robot.

The present work is aimed at developing a generalized tool that combines manipulator attributes and task requirements in a comprehensive manner for the relative ranking of manipulators. The procedure provides a coding system for robots depicting the various attributes: in the initial phase of the formulation, the attributes of the robots are identified and consciously coded so as to capture the characteristics of a robot manipulator precisely. The procedure recognizes the need for, and processes the information about, the relative importance of the attributes for a given application, without which inter-attribute comparison is not possible. Although ranking of robots on the basis of the manipulator parameters alone has been attempted by some previous researchers, ranking the robots in view of performing a given set of tasks is a novel attempt. The present work considers practical aspects and takes experimental data to form an integrated model for the relative ranking of the available candidate robots. Essentially, it contributes a methodology based on matrix methods that helps in selecting a suitable robot from among a large number of available alternatives; the methodology can be applied to any similar set-up, and is sure to help designers and users in selecting robots correctly for the intended application.

A Subjective Trust Model based on two-dimensional measurement*

Chang Chaowen (1), Liu Chen (2), Wang Yuqiao (2)
1. Institute of Electronic Technology, Information Engineering University, Zhengzhou, China
2. School of Electronic and Information Engineering, Xi'an Jiaotong University, Xi'an, China
ccw@xdja.com

Abstract—Trust models like Beth, Jøsang and EigenRep evaluate the trust degree by the history of interactions or by reputations; all of these models are based on the trustee's outer information. However, it is more reasonable to obtain the evaluation from both the outer information and the inner attributes. Aiming at this issue, this paper obtains the trustee's inner attributes through the remote attestation mechanism under the trusted computing specification of TCG, combines them with the history of interactions, and designs a trust model based on two dimensions (attributes, experiences) in order to make the evaluation of the trust degree more flexible and reliable. The experiments show that this two-dimension-based model can not only better avoid trust accumulation spoofing but also calculate the initial trust value more effectively.

1. Introduction

In open environments, the essence of trust management is to use a precise and reasonable method to describe and deal with complicated trust relationships. It is very difficult to give a standard definition of trust because of the complexity of trust relationships and the variety of application requirements.

M. Blaze brought forward the concept of trust management, together with the corresponding trust model systems PolicyMaker [1] and KeyNote [2], for the first time in 1996. At present the study of trust management models can be divided into two parts: Credential-based models and Evidence-based models [3]. PolicyMaker [1], KeyNote [2] and REFEREE [4] are classic Credential-based models. Among the Evidence-based models, the Beth model [5] brought in the concept of experiences to describe and measure the trust relationship, and classifies trust into direct trust and recommendation trust. Jøsang proposed a trust management model based on subjective logic [6][7]: it builds on the Beta distribution function expression for posterior probability estimates of binary events, gives the probability certainty density function (pcdf) characterized by the numbers of observed positive and negative events, uses the ternary aggregate {b, d, u} to describe the trust relationship, and brings in the concepts of evidence space and opinion space to describe and measure trust. The EigenRep model [8] is a classic global trust model at present; its kernel is that if some entity wants to know the global reputation value of any entity K, it needs to get the trust information about K from the entities J who have interacted with K.

Compared with the Credential-based models, the Evidence-based models are able to reflect the variety, inaccuracy and evolvement of the environment adequately. However, they give rise to some problems [9]: (1) using event probabilities to describe and measure the trust relationship confuses the subjectivity of trust with random probability; (2) most of these models integrate the trust degrees from different recommendation paths by a simple arithmetic average, so it is hard to eliminate the impact of malicious recommendations effectively; (3) although these models have calculating formulas for trust, most of them do not have an effective way to obtain the initial trust degree.

To address these problems, some researchers brought fuzzy theory and the cloud model into the research on trust management. The literature [10] proposed a new method based on fuzzy theory which uses the third party's recommendation trust values, so that the assessment of users' trust values becomes more flexible and reliable. The literature [11] defined assurance trust and global reputation, assured the anonymity of entities and the secrecy and integrity of trust values, and prevents aimed cheating and combined cheating. The literature [12] proposed a formalism for subjective trust using the cloud model, together with a qualitative reasoning mechanism of trust clouds to enable trust-based decisions. However, these ameliorations merely consider the outer information, as the classic models do, instead of the inner attributes of the trustee: the trust grade of interaction histories with other nodes, the recommendations of others, and so on. The essential characteristics of the trustee therefore cannot be reflected directly and exactly.

In this paper, we adopt the remote attestation mechanism under the trusted computing specification of the Trusted Computing Group (TCG) [13] to get the inner characteristics of the trustee,

* This work was supported by the National High Technology Research and Development Program of China (863 Program, No. 2007AA01Z479).

which offers us an effective method to get the inner information of the trustee, and thereby to evaluate the trust degree from a brand-new aspect.

2. Get inner attributes based on TCG remote attestation

TCG offers a specification for trusted computing, covering the definition of security technology and the development of industrialized criteria [13]. The Trusted Platform Module (TPM) [14][15] is the foundation stone of trusted computing, and the trustee must be equipped with a TPM chip. Integrity measurement [16][17] is a very important function of trusted computing: in a trusted node with a TPM, any module must be measured before it obtains control, and the measured values are preserved in the Platform Configuration Registers (PCR) to prevent them from being tampered with. Based on the TPM, the measurement values in the PCRs can be neither tampered with nor written in advance.

The TCG specification has defined the protocol for remote attestation [14]. When an attestation begins, the trustee conveys the PCR values, the Stored Measurement Log (SML) and other correlative information to the remote trustor in a trusted way [18]. The trustor can then estimate the security state of the trustee according to this integrity report, and thereby determine whether to interact with it. Based on the remote attestation protocol, the trustor obtains the configuration integrity information it needs from the trustee's platform, and we can know whether the present state of the main application attributes is trusted or not.

However, the platform configurations may sometimes not reflect the actual state of the platform [19][20]. For example, when software updates, the configuration will change and attest unsuccessfully, yet the platform state may still be secure and trusted. What is more, even though the platform state attests successfully, the interaction between the two parties may still fail because of the trustee's low enthusiasm in offering services to the trustor. So it is also not integrated and objective to evaluate the trust degree only by the TPM-based attestation.

3. A trust model based on two-dimensional measurement

As above, present models emphasize only one factor, the outer factor or the inner factor, so it is hard for them to avoid being unilateral. As we know, for everything the outer information is the phenomenon and the inner attributes are the essence, and there is a close relevancy between the two; estimating the trust degree of the trustee synthetically can make the measured results more accurate. Combining the remote attestation mechanism under the trusted computing specification of TCG with the interaction experience between the two parties, and using the subjective logic-based model for reference, this paper designs a trust model based on two dimensions.

3.1 Definition and Properties of Trust

1) Roles in the trust relationship.
Trustor: the proposer of the interacting services; it is also the verifying party.
Trustee: the provider of the interacting services; it is also the verified party.

2) Definitions.
Definition 1: Trust is to believe others; it is a cognitive process in which the trustor comes to believe that the trustee has the ability to meet its requests, through judging the inner attributes and the outer behaviors of the trustee. This definition emphasizes that although trust is a subjective behavior, it must be based on the objective: trust is a dialectic unification of the subjective and the objective.
Definition 2: Trust degree is the quantitative denotation of the trust grade. In our model system we use the ternary aggregate {b, d, u} in [0, 1]^3 to describe the trust relationship, where b is the trust degree, d is the distrust degree and u is the incertitude degree; all of them are continuous values, and b + d + u = 1.
Definition 3: Inner attributes trust degree is the trust degree of the trustee's inner attributes; here it means the trust grade of the integrity of the platform configurations.
Definition 4: Behavior trust degree is the trust degree of the trustee's outer behaviors; here it means the trust grade of the interaction history with other nodes, the recommendations of others, and so on.
Definition 5: Integrated trust degree is the weighted mean of the inner attributes trust degree and the behavior trust degree.

3) Properties of trust.
According to the definitions above, trust should have some properties.
Property 1: Uncertainty means it is hard to describe trust in an accurate way; there is no absolute trust or absolute distrust, so uncertainty can also be called relativity.
Property 2: Asymmetry means trust is one-way: that A trusts B does not mean that B trusts A too.
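As a rough illustration of the integrity check described above (this is a simplified sketch, not the TCG protocol itself; the structures and helper names are our assumptions), the trustor can replay the reported Stored Measurement Log and compare the result with the quoted PCR value:

import hashlib

def extend(pcr: bytes, measurement: bytes) -> bytes:
    # The PCR extend operation: new value = SHA-1(old value || measurement).
    return hashlib.sha1(pcr + measurement).digest()

def verify_quote(sml, quoted_pcr: bytes) -> bool:
    # Replay the SML (a list of measurement digests, in extension order)
    # from the all-zero initial PCR and compare with the quoted value.
    pcr = b"\x00" * 20
    for measurement in sml:
        pcr = extend(pcr, measurement)
    return pcr == quoted_pcr

The count of PCRs for which such a replay fails gives the value f used in the formulas of Section 3.2 below.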

Property 3: Incomplete transitivity means that if A trusts B and B trusts C, A still cannot trust C completely, even though B recommends it. Some other literatures hold that trust is fully transitive; in this paper we suppose that recommendation is only a reference, not an unattached way to calculate the trust degree, since otherwise it is hard to eliminate the impact of malicious recommendation effectively.

3.2 Descriptions of Trust Relationship

1) Inner attributes trust degree evaluation algorithm.
(1) Through the TCG protocol, the trustor gets the configuration integrity information it needs from the trustee's platform; the information contains the PCR values, the SML and so on.
(2) The trustor validates the configuration integrity of the trustee's platform and obtains the number f of PCR values among PCR0, PCR1, ..., PCRn that have been validated unsuccessfully.

Assume that the n+1-f PCR values which have been validated successfully share the same trusted situation, with ternary aggregate {bS, dS, uS}; here bS denotes the possibility that the module has not been impacted by malicious code (modules may impact each other due to non-isolation). Assume too that the f PCR values which have been validated unsuccessfully share the same trusted situation, with ternary aggregate {bF, dF, uF}; here dF denotes the possibility that a single unsuccessfully validated PCR value may destroy the security of the system (an unsuccessfully validated PCR value does not necessarily mean the system is threatened; e.g., a software update can cause a failed validation yet be harmless). We can then calculate the inner attributes trust degree TI = {bI, dI, uI} by the formula:

bI = (f/(n+1)) bF + ((n+1-f)/(n+1)) bS
dI = (f/(n+1)) dF + ((n+1-f)/(n+1)) dS
uI = (f/(n+1)) uF + ((n+1-f)/(n+1)) uS          (3-1)

From this formula we can see that the initial trust value can be obtained from the inner attributes trust degree. In this paper both uS and uF are taken as 0, so formula (3-1) reduces to the simple formula:

bI = (f/(n+1)) bF + ((n+1-f)/(n+1)) bS
dI = (f/(n+1)) dF + ((n+1-f)/(n+1)) dS
uI = 0          (3-2)

2) Behavior trust degree evaluation algorithm.
The behavior trust degree describes the trusted situation of the trustee's outer behaviors. We use the theory of evidence space in the Jøsang model [21][22] for reference, and suppose that every piece of interacting evidence influences the trust degree equally. Assume r denotes the number of positive events and s the number of negative events; then the behavior trust degree TO = {bO, dO, uO} is calculated by the formula:

bO = r / (r + s + 1)
dO = s / (r + s + 1)
uO = 1 / (r + s + 1)          (3-3)

3) Integrated trust degree evaluation algorithm.
The integrated trust degree is the weighted mean of the inner attributes trust degree and the behavior trust degree. Assume WI is the weight of the inner attributes trust degree and WO is the weight of the behavior trust degree, with WI + WO = 1. Then the integrated trust degree T = {b, d, u} is:

b = WI * bI + WO * bO
d = WI * dI + WO * dO
u = WI * uI + WO * uO          (3-4)

4) The initial value of the integrated trust degree.
In the initial state there have been no interactions between the two parties, so the parameters are r = 0 and s = 0, the behavior trust degree is TO = {0, 0, 1}, and the integrated trust degree is:

b = WI * bI
d = WI * dI
u = WI * uI + WO          (3-5)

3.3 Evaluation of Trust

The trustor sets a threshold T0 = {b0, d0, u0}; the setting of the threshold depends on the context and the subjective factors of the trustor, and we do not discuss how to set it in this paper. In the situation considered here, neither the trust degree nor the distrust degree attenuates over time. The evaluation of trust contains three conditions, as follows.

Condition 1: the integrated trust degree satisfies T(b) >= T0(b0), T(d) < T0(d0) and T(u) < T0(u0). The trustor regards the trustee as trusted, and the interactions are allowed.
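Formulas (3-1) to (3-5) and the threshold test translate directly into code. The sketch below is ours (Conditions 2 and 3, used in the last branches, are spelled out just after this listing), and the parameter names are assumptions rather than the paper's notation.

def inner_trust(n_plus_1, f, bS, dS, bF, dF):
    # Formula (3-2): inner attributes trust degree with uS = uF = 0.
    wF, wS = f / n_plus_1, (n_plus_1 - f) / n_plus_1
    return (wF * bF + wS * bS, wF * dF + wS * dS, 0.0)

def behavior_trust(r, s):
    # Formula (3-3): evidence-space mapping of r positive, s negative events.
    n = r + s + 1
    return (r / n, s / n, 1 / n)

def integrated_trust(TI, TO, WI, WO):
    # Formula (3-4): weighted mean of the two dimensions, WI + WO = 1.
    return tuple(WI * i + WO * o for i, o in zip(TI, TO))

def evaluate(T, T0):
    # The three-way decision of Section 3.3.
    b, d, u = T
    b0, d0, u0 = T0
    if b >= b0 and d < d0 and u < u0:
        return "trusted"       # Condition 1: interactions allowed
    if b < b0 and d >= d0 and u < u0:
        return "distrusted"    # Condition 2: interactions not allowed
    # Otherwise uncertainty dominates (Condition 3): re-validate the
    # configuration integrity of the trustee's platform.
    return "re-attest"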

Condition 2: the integrated trust degree satisfies T(b) < T0(b0), T(d) >= T0(d0) and T(u) < T0(u0). The trustor regards the trustee as distrusted, and the interactions are not allowed.

Condition 3: the integrated trust degree satisfies T(b) < T0(b0), T(d) < T0(d0) and T(u) >= T0(u0). The trustor sends requests to validate the configuration integrity of the trustee's platform.

4. Emulation experiment and analysis

We contrast our model with the subjective logic-based model and the assurance-based trust model to illuminate its advantage in the anti-cheat aspect.

4.1 Experiment 1: Avoid Malicious Accumulation of Trust Degree

The calculating formula of the subjective logic-based model is formula (3-3). After the two parties have interacted successfully for some time, the trust degree in the Jøsang model accumulates gradually to a high level. There is a problem in this method: assume the trustee offered services honestly at the first stage, so that the trust degree accumulated to a very high grade, and then suddenly offered one malicious service; the trustor would be caught without the least guard and suffer a serious loss. We call this problem the malicious accumulation of trust degree.

The subjective trust model based on assurance [11] has proposed the concept of an attenuation element to decrease the impact of this problem. The attenuation element beta starts at beta_1 = 1, and beta_m denotes the beta value at the m-th interaction; if the interactions are unstable, beta decreases by a certain value until a certain lower limit is reached. The smaller this value is, the more obviously the trust degree attenuates and the faster the interaction experience is forgotten, so the trust degree is influenced mainly by the latest interactions; the variation of beta can also reflect the stability of the interactions. The assurance-based trust model can make a trustor who has suffered a loss adjust its state, but it cannot prevent the cheat from happening.

The trust model based on two dimensions proposed in this paper can avoid the cheat effectively. Parameters are set as follows: the number of PCR values applied by the trustor is 10, i.e. n+1 = 10; in the initial state the number of positive events is r = 0 and the number of negative events is s = 0; {bS, dS, uS} = {1, 0, 0} and {bF, dF, uF} = {0.2, 0.8, 0}; the threshold T0 and the weights WI and WO are fixed in advance. There are 40 interactions between the two parties in total. For the two behavior-only models, the former 30 interactions are successful and the latter 10 are unsuccessful: the trust degree in the Jøsang model accumulates to a high level and then descends only slowly, while the trust degree in the assurance-based model descends much faster when the interactions begin to fail, but neither avoids the threat from happening. In our model the malicious code runs and destroys the integrity of the platform at the 15th time, after which the number of PCR values validated unsuccessfully is 2, i.e. f = 2, while the trustee conceals this deliberately and continues to offer the service. Fig. 1 describes the trend of the trust degree in our model.

Fig. 1: The Trend of the Trust Degree.

The analysis of the result: in the former interactions the configurations of the trustee's platform are all right, the PCR values are all validated successfully and all interactions succeed, so the trust degree is at a high level at the beginning. The model based on two dimensions detects the startup of the malicious code, and the trust degree descends to a quite low level immediately when the malicious code runs, so that the loss can be avoided.

4.2 Experiment 2: Set Initial Trust Value

Assume that the trustee is a malicious node which has destroyed the integrity of its platform. Parameters are set as follows: the number of PCR values applied by the trustor is 10 (n+1 = 10); the number of PCR values validated unsuccessfully is 2 (f = 2); {bS, dS, uS} = {0.8, 0.2, 0} and {bF, dF, uF} = {0.2, 0.8, 0}; in the initial state r = 0 and s = 0. The node repeats joining the network 5 times independently. In the subjective logic-based model there is no interaction history to rely on, so the initial trust values are decided by a mechanism of random numbers, which yields 5 distinct initial values.
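A toy replay of Experiment 1, continuing the helper functions sketched in Section 3 (the concrete weights and aggregates below are illustrative assumptions, since the paper's exact settings are not all reproduced here), shows the mechanism: the integrated trust degree collapses as soon as attestation reports f > 0, no matter how much behavior trust has accumulated.

r = s = 0
bS, dS = 1.0, 0.0            # aggregate for successfully validated PCRs
bF, dF = 0.2, 0.8            # aggregate for unsuccessfully validated PCRs
WI, WO = 0.5, 0.5            # illustrative weights, WI + WO = 1
for t in range(1, 41):
    f = 0 if t < 15 else 2   # the malicious code starts at the 15th time
    r += 1                   # outwardly, every interaction still "succeeds"
    TI = inner_trust(10, f, bS, dS, bF, dF)
    TO = behavior_trust(r, s)
    b, d, u = integrated_trust(TI, TO, WI, WO)
    # For t >= 15 the inner dimension pulls b down sharply, mirroring Fig. 1.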

Tab. 1: The Initial Trust Values for 5 Random Numbers (five distinct initial values, one for each independent join).

The analysis of the result: with the mechanism of random numbers there is a very large possibility (the probability in this experiment is 3/5) that the trustor trusts the malicious node and interacts with it. Using the formula of our model, the initial trust value is calculated from the trustee's inner attributes instead, so the malicious node whose platform integrity has been destroyed cannot obtain a high initial trust value.

5. Conclusions

In open networks, the Evidence-based trust models consider subjectivity, inaccuracy and evolvement adequately. However, the classic Evidence-based trust models consider only the trustee's outer information, so the essential characteristics of the trustee cannot be reflected directly and exactly. This paper designs a trust model based on two dimensions to make the evaluation of the trust degree more flexible and reliable. The experiments show that this two-dimension-based model can not only better avoid trust accumulation spoofing but also calculate the initial trust value more effectively. However, the improved model does not consider recommendation trust, which needs to be studied in future work.

References
[1] M. Blaze, J. Feigenbaum, J. Lacy. Decentralized Trust Management. In: Proceedings of the 1996 IEEE Symposium on Security and Privacy. IEEE Computer Society, 1996, pp. 164-173.
[2] A. Jøsang. Trust-based Decision Making for Electronic Transactions. In: Proceedings of the 4th Nordic Workshop on Secure Computer Systems (NORDSEC'99), 1999. http://security.dstc.edu.au/staff/ajosang/paper.html
[3] Yuan Shi-jin. The Research on Key Technologies of Trust Management. Doctoral dissertation, Fudan University, May 2005.
[4] Y.-H. Chu, J. Feigenbaum, B. La Macchia, et al. REFEREE: Trust management for Web applications. World Wide Web Journal, 1997, 2(2): 127-139.
[5] T. Beth, M. Borcherding, B. Klein. Valuation of trust in open networks. In: D. Gollmann (ed.), Proceedings of the European Symposium on Research in Security (ESORICS). Brighton: Springer-Verlag, 1994, pp. 3-18.
[6] A. Jøsang. A Model for Analysing Transitive Trust. Web Intelligence and Agent Systems. Amsterdam: IOS Press, 2005.
[7] A. Jøsang. A Logic for Uncertain Probabilities. International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems, 2001, 9(3): 279-311.
[8] S. Kamvar, M. T. Schlosser. EigenRep: Reputation Management in P2P Networks. In: Proceedings of the 12th World Wide Web Conference. Budapest: ACM Press, 2003, pp. 123-134.
[9] Li Xiao-yong, Gui Xiao-lin. Research on Dynamic Trust Model for Large Scale Distributed Environment. Journal of Software, 2007. (in Chinese)
[10] Zhang Yan-qun, Kang Jian-chu, et al. Model of trust values assessment based on fuzzy theory. Computer Engineering and Design. (in Chinese)
[11] Gao Cheng-shi, et al. Subjective Trust Model Based on Assurance in Open Networks. Journal of Information Engineering University. (in Chinese)
[12] Meng Xiang-yi, Zhang Guang-wei, et al. Research on Subjective Trust Management Model Based on Cloud Model. Journal of System Simulation, 2007. (in Chinese)
[13] TCG Best Practices Committee. Design, Implementation, and Usage Principles for TPM-Based Platforms, Version 1.0.
[14] Trusted Computing Group (TCG). TPM Specification v1.2. https://www.trustedcomputinggroup.org/specs/TPM/
[15] Trusted Computing Group (TCG). Trusted Platform Module Protection Profile. July 2004.
[16] R. Sailer, X. Zhang, T. Jaeger, L. van Doorn. Design and Implementation of a TCG-Based Integrity Measurement Architecture. In: Thirteenth USENIX Security Symposium. San Diego, CA: USENIX Press, 2004, pp. 223-238.
[17] X. Zhang, S. Chen, R. Sandhu. SecureBus: Towards Application-Transparent Trusted Computing with Mandatory Access Control. In: ASIACCS'07. Singapore: ACM Press, 2007, pp. 117-126.
[18] Trusted Computing Group. TCG Software Stack (TSS) Specification v1.2. https://www.trustedcomputinggroup.org/specs/
[19] A.-R. Sadeghi, C. Stueble. Property-based Attestation for Computing Platforms: Caring about properties, not mechanisms. In: New Security Paradigms Workshop (NSPW), 2004.
[20] L. Chen, R. Landfermann, H. Loehr, M. Rohe, A.-R. Sadeghi, C. Stueble. A protocol for property-based attestation. In: Proceedings of the 1st ACM Workshop on Scalable Trusted Computing (STC'06). ACM Press, 2006.
[21] A. Jøsang, S. J. Knapskog. A Metric of Trusted Systems. In: Global IT Security. Wien: Austrian Computer Society, 1998, pp. 541-549.
[22] A. Jøsang. Subjective Evidential Reasoning. In: Proceedings of the 9th International Conference on Information Processing and Management of Uncertainty in Knowledge-Based Systems (IPMU 2002). Annecy, France, 1-5 July 2002.

A Genetic Algorithm Approach for Optimum Operator Assignment in CMS

Ali Azadeh, Hamrah Kor, Seyed-Morteza Hatefi
Department of Industrial Engineering and Center of Excellence for Intelligent Base Experimental Mechanics, and Department of Engineering Optimization Research, College of Engineering, University of Tehran, Iran
aazadeh@ut.ac.ir, hamrah.kor@ut.ac.ir, s_m_hatefi@yahoo.com

Abstract—This paper presents a decision making approach based on a hybrid GA for determining the most efficient number of operators and the efficient measurement of operator assignment in a cellular manufacturing system (CMS).

Keywords—CMS, Genetic Algorithm, Entropy method, Visual SLAM, simulation, decision making

I. INTRODUCTION

Cellular manufacturing systems are typically designed as dual resource constrained (DRC) systems, where the number of operators is less than the total number of machines in the system. The productive capacity of DRC systems is determined by the combination of machine and labor resources: jobs waiting to be processed may be delayed because of the non-availability of a machine, of an operator, or of both. This fact makes the assignment of operators to machines an important factor in determining the performance of cellular manufacturing (CM) systems, and therefore makes the development of a multifunctional workforce a critical element in the design and operation of CM systems.

II. METHODOLOGY

This paper presents a GA approach for selecting the optimum operator allocation (for further information about the GA method see [4]), together with the Entropy method for determining the weights of the attributes. We begin by defining the system and its components. The system is a set of permanent and temporary entities, taking into consideration the entities' attributes and the relationships between them; this set is directed to achieve a specified objective. Permanent entities, such as machines and manpower in a manufacturing system, are named servers. Temporary entities are incorporated into the system represented within the simulation model; they pass through and then leave the system. The attributes of each entity are considered entity characteristics; examples are the arrival time, the part number (which is required for identifying the temporary entities), the number of completed parts, and the processing time of the system for each temporary entity [8].

For solving MADM problems it is generally necessary to know the relative importance of each criterion. This is usually given as a set of weights which are normalized and which add up to one; the importance coefficients in MADM methods refer to this intrinsic "weight". Some works deserve mention because they include information concerning the methods that have been developed for assessing the weights in a MADM problem. The entropy method works on the basis of a predefined decision matrix: in scenario selection problems, the decision matrix for a set of candidate scenarios contains a certain amount of information, and since the method has direct access to the values of the decision matrix, the entropy method is an appropriate one for assessing the weights. Entropy, in information theory, is a criterion for the amount of uncertainty represented by a discrete probability distribution, with the understanding that a broad distribution represents more uncertainty than does a sharply peaked one [5]. The entropy idea is particularly useful for investigating contrasts between sets of data; here, the values of the attributes are procured by means of simulation.
The GA approach is performed by employing the number of operators, the average waiting time of demand, the average lead time of demand, the number of completed parts, the operator utilization and the average machine utilization as attributes. The objective is to determine the labor assignment in the CMS environment with the optimum performance. First, we generate the scenarios and run them in the Visual SLAM software; to obtain exact answers, we set the number of runs equal to 30 [7]. Then the GA solves the problem and specifies the best scenario; we use the GA to obtain a near-optimum ranking of the alternatives in accordance with a fitness function. Figure 1 presents the proposed simulation and multi-attribute approach for optimum operator assignment, and Section IV gives an empirical illustration.
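The entropy weighting mentioned above can be sketched as follows; this is a standard formulation of the entropy method, since the paper does not spell out its exact variant, and the matrix of simulated attribute values is assumed to be positive.

import math

def entropy_weights(D):
    # D: scenarios x attributes matrix of simulated outputs.
    m = len(D)
    cols = list(zip(*D))
    # Turn each attribute column into a probability distribution.
    P = [[v / sum(col) for v in col] for col in cols]
    # Shannon entropy of each attribute, scaled into [0, 1] by ln(m).
    E = [-sum(p * math.log(p) for p in col if p > 0) / math.log(m) for col in P]
    # Attributes with lower entropy (more contrast) receive larger weights.
    d = [1 - e for e in E]
    return [x / sum(d) for x in d]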

Figure 1: The overview of the integrated GA-simulation approach (data collection and input data; definition of the scenarios; generation of the output data by computer simulation; GA-based operator assignment; final assignment by utilizing the GA).

III. PROBLEM DEFINITION

Manned cells are a very flexible system that can adapt quite easily and rapidly to changes in customer demand or in product design. The cells described in this study are designed for flexibility, not line balancing: the times for the operations at the stations do not have to be balanced, and any division of stations that achieves balance between the operators is acceptable. The balance is achieved by having the operators walk from station to station, so that the sum of the operation times for each operator is approximately equal; the walking multi-functional operators permit rapid rebalancing in a U-shaped cell. The considered cell has eight stations and can be operated by one or more operators, depending on the required output for the cell. Once a production batch arrives at the cell, it is divided into transfer batches; the transfer batch size is the transfer quantity of the intra-cell movements of parts. Operators perform the movements prescribed by each scenario in the cell. The ability to quickly rebalance the cell to obtain changes in its output can be demonstrated by the developed simulation model [1].

Figure 2: The existing manned cell example for the case model (stations, decoupler "D", operators, operator movements with parts and when out of work).

The alternatives, which consist of reducing the number of operators in the cell, are as follows in detail:
1. eight operators (one operator for each machine);
2. seven operators (one operator for only two machines and one operator for each of the others);
3. six operators (one operator per pair of machines and one operator for each of the others);
4. five operators (one operator per pair of machines and one operator for each of the others);
5. four operators (one operator per pair of machines and one operator for each of the others);
6. six operators (one operator per group of three machines and one operator for each of the others);
7. four operators (one operator per group of three machines and one operator for each of the others);
8. three operators (one operator per group of three machines and one operator for only two machines);
9. five operators (one operator for four machines and one operator for each of the others);
10. three operators (one operator for four machines and one operator per pair of the remaining machines);
11. three operators (one operator for four machines, one operator for three machines and one operator for one machine);
12. two operators (one operator per group of four machines).
The developed model includes the following assumptions and constraints:
- the time for the operators to move between machines is assumed to be zero, since the machines are all close to each other;
- the machines have no downtime for the simulated time;
- the self-balancing nature of the labor assignment accounts for differences in operator efficiency;
- the sum of the multi-function operation times for each operator is approximately equal, and when the machines are assigned to the operators, the cycle time of the bottleneck resource is chosen as close as possible to the cycle time of the operator;
- there is no buffer at the work stations.

The time for processing jobs at each station is related to the part type. The types of parts the cell can produce and the levels of demand within the cell are set to two and three respectively in the experiment, so in the developed model different demand levels and part types are taken into consideration; each of the received demand's parts has a specific type and level. Each labor assignment alternative considers three shift patterns (1, 2 or 3 shifts per day, each shift consisting of 8 hours of operation), so the twelve alternatives yield 36 scenarios, which are selected as the core of our study. The objective of the scenarios, which consist of reducing the number of operators in the cell, is to observe how the operation is distributed among the operators.

A flexible simulation model incorporating all 36 scenarios is built in Visual SLAM for quick response and results. In the simulation experiments the 36 scenarios are executed for 2000 hours (250 working days, each day composed of 3 shifts), after deletion of the transient state. Each scenario is also replicated in 30 runs, to ensure that reasonable estimates of the means of all outputs can be obtained. System performance is monitored for the different workforce levels and shifts by means of simulation; the outputs collected from the simulation model are the average lead time of demand, the average waiting time of demand, the average operator and machine utilizations, and the number of completed parts per annum. The results of the simulation experiments are used to compare the efficiency of the alternatives.

IV. APPLICATION OF THE GA MODEL

The main structure of the GA in this study is formed on the assumption that the best scenario would be a hypothetical one whose indices each take the best of their possible values. In this study, this hypothetical scenario 37, which illustrates the maximum attainable abilities in operator assignment, is called our goal. The fitness of a candidate ranking is the total distance from this goal: the distance between the goal (scenario 37) and the first scenario in the array, then between the first and the second, and so on between each pair of adjacent scenarios. The total distance is thus a variable dependent on the scenarios' positions in the array, and every chromosome yields a new value of the total distance; the best sequence of scenarios is the array with the minimum total distance and high internal cohesion among its scenarios. The six attributes must be normalized and brought to the same order before being used in the GA, since some indices have the opposite order to the rest. Every possible array composed of the 36 scenarios is encoded as a chromosome, and each individual is defined by a 64-bit string.

The above concepts are realized through a set of well-defined steps, as follows (a sketch of the fitness evaluation is given after this list):

Step 1: Normalize the index vectors.
Step 2: Standardize the indices; they are standardized through the predefined mean and standard deviation of each index.
Step 3: Define the production module. This module creates and manipulates the 50-individual population by filling it with randomly generated individuals.
Step 4: Define the recombination module, which enlists four operators:
- a tournament selection operator, which chooses individuals from the population with probability 80%; this is a popular selection method in GA, whose basic property is that the best string in the population wins both its tournaments while the worst never wins and is thus never selected (the other selection methods, sigma scaling and rank selection, were also considered in order to determine the best method);
- a uniform crossover operator, which combines bits from the selected parents with probability 85%;
- a mutation operator, which makes (usually small) alterations to the values of one or more genes in a chromosome;
- a regeneration operator, which is used to create 100-individual generations.
Step 5: Define the evaluation module. The evaluation operator assesses the ability of each chromosome to satisfy the objective; the fitness function determining the goodness of each individual is a multivariate combination whose most prominent components are the total distance and the variance.
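A sketch of the fitness evaluation of Step 5 is given below; the Euclidean metric and the function names are our assumptions (the paper speaks of "total distance" without fixing the metric, and the variance component is omitted here).

def total_distance(order, S, goal):
    # order: a permutation of scenario indices decoded from a chromosome.
    # S: the normalized attribute vectors of the 36 scenarios.
    # goal: the ideal "scenario 37" with the best value of every attribute.
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    total = dist(goal, S[order[0]])
    for i in range(len(order) - 1):
        total += dist(S[order[i]], S[order[i + 1]])
    return total

A ranking is fitter when this total is smaller, so the GA searches for the array of scenarios with the minimum total distance from the goal onwards.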

V. RESULTS

We have taken inspiration for the GA approach from the TOPSIS methodology. As is known, the TOPSIS methodology is based on the minimum distance from the best scenario and the maximum distance from the worst alternative. We, however, are not looking only for the best and the worst solutions: the solutions between these two points are also important for us, and this is the difference between this approach and the current methods. In this GA approach our goal is to obtain a total ranking of all the scenarios whose fitness function, the total distance between adjacent scenarios [9], is least. After producing 1000 generations we reach the best fitness function value, 295.362. As can be seen in Table I, our GA method satisfies this goal.

TABLE I. GA SOLUTION (best fitness value 295.362)
Ranks 1-18: scenarios 31, 35, 25, 34, 28, 10, 26, 19, 13, 22, 7, 23, 36, 32, 16, 29, 14, 1.
Ranks 19-36: scenarios 8, 27, 30, 24, 33, 20, 11, 4, 17, 21, 15, 5, 12, 2, 18, 9, 3, 6.

According to the results, scenario 12-2, in which one operator is assigned to each group of four machines and the cell works two shifts per day, is the most efficient assignment. The second best is scenario 12-1, similar to 12-2 but with one shift per day; the third is scenario 9-1, with five operators (one operator for four machines and one operator for each of the others) and one shift per day.

VI. ANALYSIS OF VARIANCE (ANOVA)

ANOVA is used to evaluate the effects of the optimum operator assignment in the CMS model. First, it is tested whether the efficiencies have the same behavior in the GA, DEA and PCA models, i.e. whether the null hypothesis H0 of equal treatment means is accepted. It is concluded that the three treatments differ at the 0.05 level. The DMUs' efficiencies are considered for the GA model in comparison with the DEA and PCA methods.

TABLE II. ANOVA: source of variation, degrees of freedom and test results for blocks (DF = 3), treatments (DF = 35), error (DF = 105) and total (DF = 143); the F tests are significant with P = 0.000.

VII. SIGNIFICANCE

The least significant difference (LSD) method is used to compare the pairs of treatment means for all i and j with i differing from j, i.e. to test H0: the means of treatments i and j are equal. The results of the LSD test reveal that treatment 1 (GA) produces significantly greater efficiencies than the other treatments; the advantage of the GA model with respect to efficiencies is shown in Table III.

TABLE III. MULTIPLE COMPARISONS (LSD): the mean differences of the GA efficiencies over those of the DEA and PCA models (e.g. 2.40335 and 3.98417, with standard error 0.24957 and significance 0.000) are significant at the .05 level.

VIII. CONCLUSION

In this study we used the GA, as a powerful method, to rank the operator assignment alternatives based on the attributes discussed in this paper; this is the main difference from the previous research [1, 2, 6]. The GA method is based on the minimum distance between the orders of the scenarios: for us, determining the best order of the scenarios is important, rather than just the best and worst solutions, and the GA determines the best solution by ranking with minimum distance. As shown in Table I, the two scenarios 31 and 6 are determined as the best and the worst respectively. Furthermore, the GA approach is able to rank the alternatives by a near-optimum fitness function. For future work we propose a hybrid simulation in which the GA and the simulation software operate simultaneously and select the useful scenario.

REFERENCES
[1] A. Azadeh, M. Anvari. "Implementation of Multivariate Methods as Decision Making Models for Optimization of Operator Allocation by Computer Simulation in CMS." In: Proceedings of the 2006 Summer Computer Simulation Conference, Calgary, Canada, 2006.
[2] T. Ertay, D. Ruan. "Data envelopment analysis based decision model for optimal operator allocation in CMS." European Journal of Operational Research, vol. 164, pp. 800-810, 2005.
[3] U. Wemmerlov, D. J. Johnson. "On the relative performance of functional and cellular layouts: an analysis of the model-based comparative studies literature." Production and Operations Management, no. 3, pp. 309-334.
[4] R. L. Haupt, S. E. Haupt. Practical Genetic Algorithms. Second edition, John Wiley, 2004.
[5] A. Shanian, O. Savadogo. "TOPSIS multiple-criteria decision support analysis for material selection of metallic bipolar plates for polymer electrolyte fuel cell." Journal of Power Sources, vol. 159, pp. 1095-1104, 2006.
[6] V. I. Cesani, H. J. Steudel. "A study of labor assignment flexibility in cellular manufacturing systems." Computers & Industrial Engineering, vol. 48, pp. 571-591, 2005.
[7] A. Pritsker. Introduction to Simulation and SLAM II. Fourth edition, John Wiley and Systems Publishing Corporation, 1995.
[8] M. Khouja. "The use of data envelopment analysis for technology selection." Computers & Industrial Engineering, vol. 28, pp. 123-132, 1995.
[9] A. Azadeh, V. Ebrahimipour, et al. "A GA-PCA approach for power sector performance ranking based on machine productivity." Applied Mathematics and Computation, vol. 186, pp. 1205-1215, 2007.

Dynamic Adaption in Composite Web Services Using Expiration Times

YU Xiaohao, LUO Xueshan, CHEN Honghui
Department of Information System and Management, National University of Defense Technology, Changsha 410073, China
yxhtgxx@yahoo.com.cn

HU Dan
Airforce Engineering University, Xi'an, Shaanxi 710077, China

Abstract—A key challenge in composite web processes is that the quality of the participating services changes during the lifetime of the process. When the web process executes, a previously selected web service may no longer satisfy the designer's requirements because of an unstable communication link, a target change, or service unavailability, and the composite web process may become invalid if it is not updated with the changes. It is then necessary to find an available equivalent service to substitute for the failed component. In this paper, differently from previous methods based on recomposing the web process, we present an approach that discovers "same skill" web services to replace the expired one by using the service expiration times; the replacement service must have the "same skill" as the failed one, i.e. the same functionality and QoS. The paper presents a process adaption framework and an algorithm that support the dynamic service selection of a failed task.

Keywords—Web service adaption, Expiration times, Service matching

I. INTRODUCTION

In volatile environments [2], where Quality of Service (QoS) items such as deadlines, quality of products and cost of services may change frequently, researchers are increasingly turning their attention to managing processes. As an example, consider an information fusion application: there are a number of web services (WS) giving information about a certain target, and to complete a task we may design a web process using web service composition methods [5, 10]. In this context we consider a "task" to be a capability offered by a web service. Service providers are often able to guarantee that the QoS items will persist for some amount of time t_exp [2, 4], after which they may vary; WS providers may define t_exp in a WS-Agreement document. A straightforward way to deal with a failed component is to try a different process for achieving the same goal, but recomposition is time consuming and ignores the fact that different service providers may offer "same skill" [11] web services. In this paper we therefore suppose that the failed web service can be substituted by a new one identified in the UDDI registry center.

II. RELATED WORK

Existing approaches to web service composition formulate the problem in different ways, depending mainly on how to select the optimal web services to meet the user's requirements. However, these web process systems are currently ill-suited for dealing with exceptions effectively, because the characteristics of the service providers who participate in a web process may change during the life-cycle of the process [11].

In [4], pre-defined event-condition-action rules are used as a workflow adaptation strategy: when a change occurs in the environment, an event is triggered to adapt the workflow. While this provides a commonly used basis for performing contingency actions and has been implemented in many BPEL execution engines, it has the limitation that under unpredictable conditions or in complex workflows it cannot enumerate all the possible actions and preconditions that may arise. To address the changes of web services, [12] offers a solution based on Bayesian learning; while this is similar in concept to our approach, the Bayesian learning method has no advantage in the time consumed updating the parameters. [11] offers an architecture for relevance-driven exception resolution, but gives no algorithm for selecting the "same skills" services. Au et al. [2] use the insight that service providers are often able to guarantee that their reliability rates and other quality-of-service parameters will remain fixed for some time t_exp, and provide a T-correct algorithm to ensure that the web process remains unchanged for a horizon T, after which the web process is recomposed. This method can ensure that the best plan is selected and reduces the number of computations, but it still needs to recompose the web services when computing the expected cost of following the optimal policy pi*.
In [3], instead of finding the optimal policy each time a service parameter changes, the value of the changed information with expiration times (VOC) and the query cost are used to judge whether it is worthwhile to choose a new plan. However, this approach may bring on plan recomputations that cannot bring about any change to the failed web process; it is time consuming and may lead to frequent unnecessary computations.

III. DEALING WITH WEB PROCESSES USING EXPIRATION TIMES

In this paper, different from the previous methods based on recomposing the web process, we take an architecture and an algorithm that find "same skill" services in order to replace the one that expired in the composite process flow [8].
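The expiration-time information can be used very directly when choosing a replacement. The sketch below is an illustration under our own assumptions (the field names and the cost-based tie-breaking are not from the paper): among the functionally matching candidates, only those whose guarantees still cover the required horizon T are considered.

def pick_replacement(candidates, now, T):
    # Keep candidates whose advertised expiration time still guarantees
    # at least T further time units, then prefer the cheapest one.
    alive = [c for c in candidates if c["expires_at"] - now >= T]
    return min(alive, key=lambda c: c["cost"]) if alive else None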

A. Definitions

To find a "same skill" service Task2 for a failed service Task1, we need two kinds of matching. There is a set of metrics that measure the quality of a web service, such as availability, execution cost, reputation, successful execution rate and frequency; these QoS properties have been emphasized in previous research works [5]. Two providers of the same task supporting the same generic inputs and outputs may have different values of QoS.

(1) Input/output matching. Inputs and outputs are functional attributes of a web service. Descriptions of inputs and outputs go beyond the specifications employed in a WSDL document, since WSDL uses machine-oriented types whereas input/output matching involves higher-level semantics. To formally express these semantics, domain ontologies or domain taxonomies can be employed [6, 9]. Motivated by functional matching and current concept similarity techniques [1], we set out from the following two rules to define an input/output semantic match. Let I_task1 and I_task2 be the input sets of Task1 and Task2, and O_task1 and O_task2 their output sets.

Rule 1: for each i' in I_task2 there exists an i in I_task1 with i' containing i, where "i' contains i" indicates that i = i' or i is a subclass of i'.

Rule 2: for each o in O_task1 there exists an o' in O_task2 with o containing o', where "o contains o'" indicates that o = o' or o' is a subclass of o.

These two rules make the input set of Task2 semantically equal to or contained in that of Task1, and the output set of Task2 semantically equal to or containing that of Task1, which ensures that Task2 adapts to the web process context of the failed service Task1. In the web service replacement the operation names can differ, but the inputs and outputs must be semantically equivalent.

(2) QoS matching. Assume that Task1, with availability A1 and cost C1, is already functioning in the BPEL execution. If Task1 expires, then Task2 can replace Task1 in the web process if the following condition is true:

(Input/Output matching) AND (A2 >= A1) AND (C1 >= C2)

B. Adaption Framework

Fig. 1 shows the proposed service selection framework, which is designed to support the dynamic adaption of composite web services using expiration times. The framework is composed of five modules: the Service Monitoring module (SM), the Failure Diagnosing module (FD), the Failure Recovery module (FR), the Composition Service Management module (CSM) and the Execution Engine module (EE).

Fig. 1: Service Replacement Framework.

When an exception occurs, the sequence of steps performed is as follows. In step 1, when the SM module detects a BPEL execution exception, it sends a signal/message tagged with the failed component id to the EE module to stop the execution. In step 2, the FD module analyses the causes of the exception and sends the failed service's characteristics (input and output parameters, QoS and so on) to the CSM module, which updates the database of replacement policies. The replacement policy can be a predefined policy or the "default" [11], under which all attribute values of the replacement task must not only satisfy the input/output matching but also have QoS "better" than the corresponding attributes of the failed task; to keep the "default" from changing with time, it is updated only the first time a component fails. In step 3 the CSM module returns the replacement web service's characteristics; using this message, in step 4 the FR module decides the execution path and selects the required web service in UDDI to replace the failed one. The EE module then executes the new web process in step 5.

C. Algorithm

In Fig. 2 we show the algorithm for adapting the web process in a volatile environment. The algorithm selects the "same skill" web service to replace the expired one (lines 4-17). In the algorithm, t_query is the time spent finding the "same skill" web service and t_response is the time spent while the web process executes successfully. Notice that a web service might expire while the query for a "same skill" service is under way, during which other services may also expire; we must anticipate this and add those services in advance to the expired set ES (line 14). In line 14 the algorithm invokes the procedure which finds the service s* that can be assured to remain fixed for T to replace the expired one; because s* is a newly selected service, its timer t[s*] is set to zero, and the time t_query is added to the timers of the other executing services. If no service has expired and the web process runs successfully, each service's execution time is increased accordingly (lines 18-20). The parameter T is designated by the policy.
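Rules 1 and 2, together with the QoS condition, can be sketched as follows; the dictionary layout and the tree-shaped ontology are simplifying assumptions of ours.

def subsumes(ontology, general, specific):
    # True if specific equals general or is one of its subclasses.
    # ontology maps each concept to its direct superclass (None at the root).
    while specific is not None:
        if specific == general:
            return True
        specific = ontology.get(specific)
    return False

def can_replace(task1, task2, ontology):
    # Rule 1: every input that Task2 needs is matched by (a subclass of)
    # an input Task1 already receives in the process context.
    rule1 = all(any(subsumes(ontology, i2, i) for i in task1["inputs"])
                for i2 in task2["inputs"])
    # Rule 2: every output Task1 promised is covered by an output of Task2.
    rule2 = all(any(subsumes(ontology, o, o2) for o2 in task2["outputs"])
                for o in task1["outputs"])
    # QoS matching: at least as available, no more expensive.
    return rule1 and rule2 and task2["A"] >= task1["A"] and task2["C"] <= task1["C"]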

We compare the successful executing times with the method based on process recomposition. (b) The expiration times change Fig. V. 2: Service Replacement algorithm IV. 4(b). t recomposition with the component increasing. To reuse the available web services in a web process and increase it’s execution times. EXPERIMENT In this section we simulated our algorithm using a web process scenerio with several web service components in Fig. CONCLUSION AND FUTURE WORK Fig. In Fig. In our experiment. It can be seen that the average execution times of the web process using our approach are longer than the method based on process In this paper we deal with dynamic adaption of web process in volatile environments. However the recomposition method may be more efficiency. (a) The components change Fig. Under this condition we can avoid the time of finding the failed one by modify the Policy. First we define the two conditions that must be satisfied when we select the “same skill” web services. 3: Experiment Scenerio The result is shown in Fig.find the “same skill” web service and the times response is the times spend during the web process successfully executing. We change some web service description profiles such as giving each service an expiration times randomly from 1s to 20s. Our future work is to abstract the Business Process Language (BPEL) to make the process language support the dynamic adaption of the web process. That is because the shorter are the web services’ expiration times the longer times that spend on finding replacements or recompositing a web process. For each component we randomly created some relevant web services by using the OWLS_TC V26. we propose an intelligently web process adapting method by finding the failed services’ replacement. That is because the recomposition of a web process will consume more and more time when the component number of a web process increases. when there is no satisfied web service for the failed component. we randomly selected a web service for each component as an initialization. 4(a). Notice that the successful execution times of a web process increase with the expiration times increasing. we compare the runtimes taken in effectively executing the web process when we increase the average web services expiration times. We will also study the storage strategy of “same skill” web services to accelerate the searching times in UDDI registry center. 4: Successful execution times of a web process using different method In the experiment we can find that out approach is excellent in increasing the execution times of a web process when there are replacements of the failed web services. 3. 49 . Then we present the process and algorithm of our approach to deal with the expired web service. building the ontology relationship between web services’ interface description.

REFERENCES

[1] A. Charfi and M. Mezini, "Aspect-Oriented Web Service Composition with AO4BPEL," Proceedings of the European Conference on Web Services, 2004.
[2] K. Sivashanmugam, K. Verma, A. Sheth, et al., "Adding Semantics to Web Services Standards," Proceedings of the 1st International Conference on Web Services, 2003.
[3] Liangzhao Zeng, Boualem Benatallah, M. Dumas, et al., "Quality Driven Web Service Composition," Proceedings of the 12th International World Wide Web Conference (WWW 2003), May 20-24, 2003.
[4] P. Wohed, M. Dumas, et al., "Analysis of Web Services Composition Languages: The Case of BPEL4WS."
[5] P. Doshi, R. Goodwin, R. Akkiraju, and K. Verma, "Dynamic Workflow Composition Using Markov Decision Processes," Journal of Web Services Research (JWSR), 2(1):1-17, 2005.
[6] D. Nau, T.-C. Au, O. Ilghami, U. Kuter, J. W. Murdock, D. Wu, et al., "SHOP2: An HTN Planning System," JAIR 20 (2003), 379-404.
[7] T.-C. Au and D. Nau, "Web Service Composition with Volatile Information," International Semantic Web Conference, 2005.
[8] J. Harney and P. Doshi, "Speeding up Adaptation of Web Service Compositions Using Expiration Times," Proceedings of the 16th International World Wide Web Conference (WWW 2007), May 8-12, 2007, 1023-1032.
[9] OWL Services Coalition, "OWL-S: Semantic Markup for Web Services," OWL-S White Paper, http://www.daml.org/services.owl-s/1.1/owl-s.pdf, 2004.
[10] R. Müller, U. Greiner, and E. Rahm, "AgentWork: A Workflow System Supporting Rule-Based Workflow Adaptation," Journal of Data and Knowledge Engineering, 51(2):223-256, 2004.
[11] Kareliotis Christos, Vassilakis Costas, and Georgiadis Panayiotis, "Toward Dynamic Relevance-Driven Exception Resolution in Composite Web Services," LNCS 5266, Springer-Verlag Berlin Heidelberg, 2008, 200-215.
[12] A. Fernandez and S. Ossowski, "Exploiting Organisation Information for Service Coordination in Multiagent Systems," Proceedings of the 7th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2008), May 12-16, 2008, 257-264.

An Emotional Intelligent E-learning System Based on Mobile Agent Technology

Zhiliang Wang, Xiangjie Qiao, Yinggang Xie
School of Information Engineering, University of Science & Technology Beijing, Beijing, China
wzl@ies.ustb.edu.cn, qxj7711@163.com, yinggangxie@163.com

Abstract—The emergence of the concept "Affective Computing" means that the computer's intelligence is no longer a purely cognitive one, while current e-learning is still referred to as cognitive education. Current e-learning systems cannot instruct students effectively, since they do not consider the student's emotional state in the context of instruction. In this paper, we construct an emotional intelligent e-learning system based on mobile agent technology. A dimensional model is put forward to recognize and analyze the student's emotional state, and a virtual teacher's avatar is offered to regulate the student's learning psychology, with a teaching style derived from the teacher's personality traits. A "man-to-man" learning environment is built in the system to simulate the pedagogy of the traditional classroom.

Keywords—emotional intelligence; e-learning; mobile agent; virtual teacher; two-dimension model

I. INTRODUCTION

Learning is one of the cognitive processes affected by one's emotional state [3]. Researchers in neurosciences and psychology have found that emotions are widely related to cognition; they exert influence on various behavioral and cognitive processes, such as attention, long-term memorizing, and decision-making [6][7]. As the educational psychologist Wu Qinglin [4] said, today's learning systems lack emotional interaction in the context of instruction. Rozell and Gardner's [8] study pointed out that when people have negative attitudes towards computers, their self-efficacy toward using them is reduced, which then reduces their chances of performing computer-related tasks well compared to those with positive attitudes towards computers. This research also emphasized that individuals with more positive affect exert more effort on computer-related tasks.

Though Intelligent Tutoring Systems (ITS) can provide individualized instruction by adapting to the knowledge, learning abilities and needs of each individual student, they are still not as effective as one-on-one human tutoring. They cannot simulate the traditional classroom scenario, in which the teacher and students face each other in class. So e-learning systems should have emotional intelligence to meet this need; in a word, an effective individualized learning system should be not only intelligent but also emotional.

The Artificial Psychology theory put forward by Professor Zhiliang Wang [1] proposes imitating human psychological activities with artificial machines (computers, objective function algorithms) by means of information science. It offers the theoretical and modeling basis for realizing emotional intelligence in e-learning systems, that is, for processing emotional computation on the web at large scale; thus we can establish a harmonious environment for human-computer interaction. In this paper, we take the student's emotional state into account by analyzing his learning psychology and motivation, with an emphasis upon "man-to-man" instruction. In our project, we construct a virtual teacher agent to communicate with students in the process of learning; the virtual teacher responds to the student's current state and regulates his emotion toward the optimum by words or expressions.

II. THE ARCHITECTURE BASED ON MOBILE AGENT

The system's architecture is shown in Figure 1. Traditional paradigms for building distributed applications are RPC and, most recently, its object cousins RMI and CORBA. For this class of paradigms, the functionality of an application is partitioned among the participating nodes: computation itself is partitioned, and the different participants use message-passing to coordinate the distributed computation and to exchange intermediate results and other synchronization information, so the system occupies the bandwidth frequently. In the Mobile Agent paradigm, by contrast, computation is migrated toward the resources. This largely reduces the data flow in the Internet and diminishes the network's load, conquers network latency, adapts to new environments dynamically, and improves the system's robustness and fault tolerance. A mobile agent also has its particular characteristics of Mobility, Autonomy, Personalization and Adaptivity. In this project, we experiment with such a paradigm, Aglets; "aglet" is shorthand for agent plus applet. It provides an infrastructure for building distributed applications based on mobile agent technology, and its mobility property makes it easy to implement information gathering and retrieval. Next we introduce each server and component's functionality and role in the system.

When a student enters our system, he should first register his information and wait for the system's verification; the E-learning Service Center provides the registration service and cooperates with the other agent systems. After the student passes the verification, the system offers three services in a web page for him to choose from: the first is "Class", the second is "Question and Answer", and the third is "Online Test". All three functions are implemented on the basis of mobile agent technology.

Figure 1. System's architecture

The E-learning Service Center Server (ESCS) is in charge of the general management of the system. It initializes the entire environment (e.g., Student Server creation) and monitors the other servers' activity. When a student wants to have classes with our system, the ESCS sends a Student Server Management Agent (SSMA) to create the Student Agent (SA) and to manage the other agents in the Student Agent Server; each student is matched with one SA. At the same time, the ESEC creates a Teacher Agent and sends it to the Teacher Agent Server, and the information about the courses and teachers is offered. The Classroom Server, which is provided for students to have classes, receives the Mobile Student Agent's request and dispatches a Virtual Teacher Agent (VTA) to begin a class; if it is the student's first time in our system, he can start his class with a virtual teacher saying hello to him.

The VTA does four things in the context of instruction. First, it listens to the student's changing state, especially his emotional state. Second, it analyzes the student's emotional and cognitive data to decide the next action. Third, it regulates the student's state through activities such as verbal or facial expressions. Fourth, it receives the student's questions and returns the answers: the VTA tells the ESEC to create a Mobile Query Agent (MQA) and dispatch it to every campus aglet server registered in the ESEC, and the query results are fetched and arranged for the students. Meanwhile, the Information Gathering Agent (IGA) gathers pedagogical information and provides useful pedagogical tactics for the VTA through a data mining mechanism.

III. STUDENT'S LEARNING PSYCHOLOGY MODEL

We often try to estimate a person's internal state by observing facial expressions, voice inflections, and even eye and other body movements. Teachers use these non-verbal signals to make judgments about the state of students and to improve the accuracy and effectiveness of interactions. This non-verbal communication is completely lost when people communicate with computers [11]; to interact most effectively, it is therefore useful to gain insight into "invisible" human emotions and thoughts by interpreting these non-verbal signals. In our research, the emotion of a student is mainly recognized from the facial expression captured by a camera, and we use a two-dimension model to describe the student's learning psychology: interest level and attention level.

The interest level mainly depends on the distance between the student and the computer; generally speaking, if the student is interested in the instruction, he sits nearer to the computer to attend to the learning material. By computing the student's face area obtained from the camera, the interest level can be estimated [13] (Equation (1)):

$$E_i = \begin{cases} 0 & x \le x_{F\min} \\ \left(\dfrac{x - x_{F\min}}{x_{F\max} - x_{F\min}}\right)^2 & x_{F\min} \le x \le x_{F\max} \\ 1 & x_{F\max} \le x \end{cases} \qquad (1)$$

where $E_i$ is the interest level, $x$ is the face area, $x_{F\min}$ is the minimum face area that can be detected, and $x_{F\max}$ is the maximum face area that can be detected.

The attention level detects whether the student is learning carefully enough [13] (Equation (2)); if the student pays much attention to the instruction, his pupils become much bigger than usual:

$$E_a = \begin{cases} 0 & x \le x_{e\min} \\ \left(\dfrac{x/p_i - x_{e\min}}{x_{e\max} - x_{e\min}}\right)^2 & x_{e\min} \le x \le x_{e\max} \\ 1 & x_{e\max} \le x \end{cases} \qquad (2)$$

where $E_a$ is the attention level, $x$ is the space between the eyelids, $p_i$ is the current interest level, $x_{e\max}$ is the maximum of the averaged pupil size, and $x_{e\min}$ is the minimum of the averaged eyelid distance.

Finally, we obtain the general learning psychology $P$ of the student by Equation (3):

$$P = \alpha E_i^2 + \beta E_a^2 + \gamma C^2, \qquad \alpha + \beta + \gamma = 1 \qquad (3)$$

where $\alpha$, $\beta$, $\gamma$ are the weights of $E_i$, $E_a$ and $C$ respectively, and $C$ is the student's cognitive evaluation value.
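To make the model concrete, the following Python fragment is a minimal sketch of Equations (1)-(3) under the definitions above. The threshold values and weights are illustrative placeholders, not the values used by the authors; the clamped-ramp helper simply implements the piecewise form shared by Equations (1) and (2).

```python
def ramp_squared(x: float, lo: float, hi: float) -> float:
    """Piecewise form of Eqs. (1)-(2): 0 below lo, 1 above hi,
    a squared linear ramp in between."""
    if x <= lo:
        return 0.0
    if x >= hi:
        return 1.0
    return ((x - lo) / (hi - lo)) ** 2

def interest_level(face_area, x_fmin, x_fmax):
    # Eq. (1): a larger detected face area means the student sits closer,
    # which the model reads as higher interest.
    return ramp_squared(face_area, x_fmin, x_fmax)

def attention_level(eyelid_gap, p_i, x_emin, x_emax):
    # Eq. (2): eyelid/pupil opening normalized by the current interest p_i.
    return ramp_squared(eyelid_gap / max(p_i, 1e-6), x_emin, x_emax)

def learning_psychology(e_i, e_a, c, alpha=0.4, beta=0.4, gamma=0.2):
    # Eq. (3) with alpha + beta + gamma = 1; these weights are illustrative.
    assert abs(alpha + beta + gamma - 1.0) < 1e-9
    return alpha * e_i**2 + beta * e_a**2 + gamma * c**2
```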

IV. VIRTUAL TEACHER'S EMOTION MODEL

Positive affects are fundamental in cognitive organization and thought processes, and they play an important role in improving creativity and flexibility in problem solving [8]; reciprocally, negative affects can block thought processes [9]. It is an illusion to think that learning environments which do not consider motivational and emotional factors are adequate. Since there is no real teacher or real classroom in an e-learning environment, we construct a teacher's avatar as a virtual teacher that analyzes the student's emotional state and gives proper regulation to adjust negative emotions.

The virtual teacher's emotion model contains three modules, the Sense Module (SM), the Thinking Module (TM) and the Behavior Module (BM):

$$STB = \langle SM, TM, BM \rangle$$

The virtual teacher listens to the student's learning state, especially the emotional state, through the VTA by the Sense Module. Once the SM senses a change, it informs the TM, which analyzes the student's state and decides how to regulate it by activities such as words or facial expressions. The Thinking Module is also a control module; it is made up of knowledge and personality. Knowledge acts as the regulation rule and depends on psychological and pedagogical theory and experience. Three couples (six emotions) are employed in the model: joy and anger, pride and sadness, hope and disappointment (see Figure 2).

Figure 2. Various emotions for the virtual teacher

The change of the virtual teacher's emotional state depends on two factors: external stimuli and teaching style. External stimuli are the student's new emotional states. Suppose the probability of changing emotion is given by the transition matrix $P_s$, where $p_{xy}$ is the probability that emotion $x$ changes to emotion $y$:

$$P_s = \begin{bmatrix} p_{xx} & p_{xy} & p_{xz} \\ p_{yx} & p_{yy} & p_{yz} \\ p_{zx} & p_{zy} & p_{zz} \end{bmatrix} \qquad (4)$$

Then the change of emotion caused by external stimuli is

$$\Delta e_s = e_s^n P_s = [x_s, y_s, z_s]\, P_s \qquad (5)$$

As every person has his own personality, some teachers are rigorous while some are easy-going with their students; we take the virtual teacher's teaching style as his personality trait, and adopt the OCEAN model [12] to map teaching styles (see Table 1).

TABLE I. TEACHING STYLE VS. OCEAN

| OCEAN | Teaching Style |
|---|---|
| Openness | Active |
| Conscientiousness | Responsible |
| Extraversion | Energetic |
| Agreeableness | Easy-going |
| Neuroticism | Rigorous |

For each dimension of personality in the OCEAN model, $W_i$ denotes the percentage of that dimension, and $\theta_{ep}$ defines the weight of personality factor $p$ for emotion $e$; for instance, $\theta_{joy,O}$ is the weight of personality factor O for the joy emotion. The teaching-style contribution is obtained by (6):

$$\Delta p = [W_a, W_b, W_c] \begin{bmatrix} \theta_{aa} & \theta_{ab} & \theta_{ac} \\ \theta_{ba} & \theta_{bb} & \theta_{bc} \\ \theta_{ca} & \theta_{cb} & \theta_{cc} \end{bmatrix} \qquad (6)$$

Finally we get the new emotion $e_s^{n+1}$ by Equation (7):

$$e_s^{n+1} = e_s^n + \Delta e_s + \Delta p = e_s^n + e_s^n P_s + \Delta p = [x_l, y_l, z_l] \qquad (7)$$

When the teacher wants to give the student a suggestion, a rigorous teacher may say "You should…", while an easy-going teacher will say "I suggest that you…", "Let's…", "Perhaps you would like to…" or "Maybe you could…". The words and phrases thus differ according to the teaching style as well.
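The update rule in Equations (4)-(7) amounts to one step of a Markov-style linear update plus a personality bias. The following NumPy sketch illustrates it; the transition matrix, OCEAN percentages and θ weights below are illustrative placeholders, since the paper does not list concrete values.

```python
import numpy as np

# Rows/columns index a triple of emotions (x, y, z), e.g. (joy, hope, pride).
P_s = np.array([[0.6, 0.3, 0.1],      # Eq. (4): p_xy = P(emotion x -> y)
                [0.2, 0.7, 0.1],
                [0.1, 0.2, 0.7]])

W = np.array([0.5, 0.3, 0.2])         # percentages of personality dimensions
theta = np.array([[0.10, 0.05, 0.02], # theta[p][e]: weight of personality
                  [0.04, 0.08, 0.03], # factor p for emotion e
                  [0.02, 0.03, 0.09]])

def next_emotion(e_n: np.ndarray) -> np.ndarray:
    delta_es = e_n @ P_s              # Eq. (5): stimulus-driven change
    delta_p = W @ theta               # Eq. (6): teaching-style bias
    return e_n + delta_es + delta_p   # Eq. (7)

e0 = np.array([0.2, 0.5, 0.3])
print(next_emotion(e0))
```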

V. TESTBED

We have constructed a prototype and developed some agent modules to test the effectiveness of the architecture. The virtual teacher's avatar is revised from Ken Perlin's applet program (http://mrl.nyu.edu/~perlin/). Figure 3(a) shows the classroom server's management interface and (b) the student's learning window.

(a) Classroom server's management window
(b) Student's learning window
Figure 3. System's interface

The system's functionalities include: (1) students can quickly find course-related information, and data sharing is well implemented; (2) students can communicate with the real teacher or the virtual teacher emotionally, asking the teacher whenever they are confused by the instruction and answering the questions forwarded by the virtual teacher as well, while the virtual teacher responds to the student's emotional state through verbal or non-verbal expressions; (3) students can do online testing to test themselves independently.

In addition, we have carried out incremental testing to test the system's scalability. Table 2 shows the feedback times when querying all the sub agent systems.

TABLE II. TESTING RESULTS

| Number of sub agent systems | Time for obtaining all data | Time for obtaining the first data |
|---|---|---|
| 2 | 15.6s | 13.5s |
| 3 | 17.7s | 14.2s |
| 4 | 21.5s | 14.7s |
| 5 | 22.3s | 17.5s |

VI. CONCLUSION AND FUTURE WORK

In this paper, we have constructed an emotional intelligent e-learning system based on mobile agent technology. In future work, we will revise the details of our system and find some school students to test the system's performance.

ACKNOWLEDGMENT

This paper is supported by the National Natural Science Foundation of China Grant #60573059 and Natural Science Foundation of Beijing Grant #KZ200810028016.

REFERENCES

[1] Wang Zhiliang, "Artificial Psychology: a Most Accessible Science Research to Human Brain," Journal of University of Science and Technology Beijing, 22(5), 2000.
[2] A. Sarrafzadeh, S. Overmyer, F. Fan, and H. Hosseini, "Facial expression analysis for estimating learner's emotional state in intelligent tutoring systems," Proceedings of the 3rd IEEE International Conference on Advanced Learning Technologies, 2003, pages 336-337.
[3] D. Goleman, Emotional Intelligence, New York: Bantam Books, 1995.
[4] Wu Qinglin, Pedagogical Psychology: to Pedagogue, Shanghai: East China Normal University Press, 2000.
[5] R. W. Picard, Affective Computing, MA: MIT, 1997.
[6] A. Damasio, Descartes' Error: Emotion, Reason and the Human Brain, NY: Putnam Press, 1994.
[7] C. Frasson, "Using Cognitive Agents for Building Pedagogical Strategies in a Multistrategic Intelligent Tutoring System," Bayonne: Deuxième journée Acteurs, Agents et Apprentissage, 1998.
[8] E. J. Rozell and W. L. Gardner, "Cognitive, motivation, and affective processes associated with computer-related performance: a path analysis," The Journal of Computers in Human Behavior, 16 (2000), 199-222.
[9] A. Isen, "Positive Affect and Decision Making," Handbook of Emotions, 2000.
[10] G. Reed, "Obsessional cognition: performance on two numerical tasks," British Journal of Psychiatry, 130 (1977), 184-185.
[11] C. Idzikowski and A. Baddeley, "Fear and performance in novice parachutists," Ergonomics, 30 (1987), 1463-1474.
[12] P. T. Costa and R. R. McCrae, "Normal personality assessment in clinical practice: The NEO personality inventory," Psychological Assessment, (4): 5-13, 1992.
[13] An Ping, Zhiliang Wang, Xiangjie Qiao, Lijuan Wang, and Yinggang Xie, "Affective Recognition Based on Approach-Withdrawal and Careness," ISAI'06, 2006-08.

Audio Watermarking for DRM based on Chaotic Map

B. Lei, Y. Soon
School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore, 639798
leib0001@ntu.edu.sg

Abstract—This paper focuses mainly on using chaos encryption to protect AVS audio efficiently. A novel digital watermarking approach is introduced for audio copyright protection: a meaningful gray image is embedded into digitally compressed audio data by quantizing the MDCT coefficients (using integer lifting MDCT) of the audio samples based on a chaotic map. The proposed watermarking algorithm can extract the watermark image without help from the original digital audio signal. The experimental results show that the proposed approach is robust to audio degradations and distortions such as noise adding, resampling, low pass filtering, re-quantization, and compression, especially preventing the watermark from being removed by compression. The digital watermarking scheme is thus able to provide feasible and effective audio copyright protection.

Keywords—audio watermarking; chaotic map; covert communication; DRM

I. INTRODUCTION

Digital audio watermarking embeds inaudible information into digital audio data for the purposes of copyright protection, ownership verification, covert communication, and/or auxiliary data carrying. With the widespread infusion of digital technologies and the ensuing ease of digital content transport over the Internet, Digital Rights Management (DRM) of multimedia data has become a critical concern. Digital watermarking techniques have received a great deal of attention recently in industry and the academic community, but current watermarking schemes mainly focus on image and video copyright protection [1-3]. Audio content protection also plays an important role in many digital media applications; to our knowledge, however, audio watermarking methods have not been studied as much.

Early work on audio watermark embedding achieved inaudibility by placing the watermarking signal in perceptually insignificant regions; one popular choice was the higher frequency region [4]. For a review of the early watermarking schemes and the main requirements of a watermarking scheme, the reader may consult [5]. The majority of watermarking schemes proposed to date use watermarks generated from pseudorandom number sequences. Pseudorandom sequences have the advantage that they can be easily generated and recreated: as a single seed will reproduce the same sequence of numbers each time the generating function is iterated, a single seed (along with an initial value) will always reproduce the same sequence. Similarly, chaotic functions have been used to generate watermarking sequences [6-8].

Another trend in digital audio watermarking is to combine watermark embedding with the compression or modulation process; the integration can minimize unfavorable mutual interference between watermarking and compression. In one scheme, the Fourier transform magnitude coefficients are replaced with the watermarking sequence. In some systems, watermark embedding is performed during vector quantization: the watermark is embedded by changing the selected code vector or the distortion weighting factor used in the searching process. Some previous works on MP3 are based on frequency-selective methods that choose different frequency coefficients to be encrypted.

The Audio Video coding Standard (AVS) is China's second-generation source coding/decoding standard with fully independent intellectual property. As the sixth part of the AVS standard, AVS DRM aims to offer a universal and open interoperable standard for the various DRM requirements of the digital media industry.

Although the audio watermarking methods described above have their own features and properties, they share some common problems, as follows:

(1) The amount of hidden information is small. (2) The detection procedure needs the original digital audio signal. (3) The robustness and invisibility are not good enough, because the Human Auditory System (HAS) is not taken into account adequately. (4) Most transform-domain digital audio watermarking schemes have round-off error problems, because they use floating point calculations; moreover, some of them can only embed a pseudorandom bit sequence or a binary image.

In this paper, we introduce a new adaptive digital audio watermarking algorithm based on the quantization of MDCT coefficients (integer lifting MDCT). The features of the proposed algorithm are: (1) the embedded watermark is a meaningful gray image; (2) blind detection, without resorting to the original digital audio signal or any other side information; (3) integer lifting MDCT, which solves the round-off error problems.

This paper proceeds as follows. Section 2 describes the proposed watermark generation. In Section 3, the watermark embedding and detection procedures are provided. Section 4 shows the experimental results and discussions. Conclusions are given in Section 5.

II. GENERATION OF WATERMARKING WITH CHAOTIC MAPS

The majority of watermark generation schemes proposed to date use a pseudorandom number generator to create a watermarking sequence which is embedded in the cover work. In this paper, the watermark is generated using deterministic chaotic maps. The motivation for using a chaotic function is that a single variable seeding the chaotic function will always result in the same output (map) when certain constraints or initial conditions are placed on the mapping. A chaotic function is unpredictable and indecomposable, yet contains regularity, and is sensitive to initial conditions; a primary advantage is that it is possible to investigate the spectral properties of the resulting watermark. Several chaotic maps can be used, for example the Bernoulli map, the skew tent map, and the logistic map. In this study the logistic map was selected, as it is a well-behaved chaotic function which has been extensively studied. The simplest chaotic sequence generator is the one-dimensional logistic map, which is unimodal and defined as

$$x_{k+1} = \mu x_k (1 - x_k), \qquad x_k \in (0, 1)$$

where $0 < \mu \le 4$ is the bifurcation parameter. When $\mu$ is restricted within the narrow range $3.57 < \mu \le 4$, $x_k$ is in a chaotic state. The chaotic sequence is sensitive to initial values and random-like, so it can be used to select embedding points randomly.

III. BINARY IMAGE WATERMARKING EMBEDDING AND EXTRACTION

A. System Model

Our proposed embedding and extraction scheme is shown in Figure 1.

Figure 1. Proposed embedding and extraction scheme

There are three ways to embed a binary image as a watermark into audio A:

$$A'_i = A_i + \alpha x_i \qquad (1)$$
$$A'_i = A_i (1 + \alpha x_i) \qquad (2)$$
$$A'_i = A_i e^{\alpha x_i} \qquad (3)$$

where $A_i$ is the audio transform coefficient, $x_i$ the watermark value, and $\alpha$ the embedding stretch factor. We adopt equation (2), as it adapts to changes in $A_i$.

B. Pre-processing of the Image Watermark

The watermark is a visually distinguishable $M_1 \times M_2$ binary image, which can be represented as

$$W = \{w(i, j),\ 0 \le i < M_1,\ 0 \le j < M_2\}, \qquad w(i, j) \in \{0, 1\} \qquad (4)$$

Because $W$ is a two-dimensional binary image, a dimension-reduction operation is needed before it is embedded into the audio $A_i$; the two-dimensional image is converted to a one-dimensional sequence by equation (5):

$$V = \{v(k) = w(i, j),\ 0 \le i < M_1,\ 0 \le j < M_2,\ k = i \cdot M_2 + j\} \qquad (5)$$

To eliminate the correlation between neighboring elements and enhance the robustness of the watermark, we use a linear recursive shift register to sort (permute) all the elements of $V$:

$$V_p = \mathrm{sort}(V) = \{v_p(k) = v(k'),\ 0 \le k, k' < M_1 M_2\} \qquad (6)$$

After sorting, the $k'$-th element of $V$ is moved to the $k$-th position of $V_p$.

To further improve the anti-attack ability of the watermark, we modulate the watermark sequence by a chaotic signal: a chaotic sequence $\{r(l)\}$, $r(l) \in \{0,1\}$, is generated with the secret key $K$, and $\{v_p(k)\}$ is spread with modulation factor $m$:

$$s(l) = v_p(\lfloor l/m \rfloor) \oplus r(l), \qquad 0 \le l < m M_1 M_2 \qquad (7)$$

Chaotic modulation has the advantages of high anti-interference ability, low-power spectral density and high confidentiality; however, it improves robustness at the cost of increased signal capacity, so the value of $m$ is chosen according to the application.

C. Watermark Embedding

Given an original audio signal with $L$ samples and an $M_1 \times M_2$ binary image, we embed the watermark $W$ in the MDCT transform domain. The detailed embedding process is as follows:

(1) Segmentation: segment $M_1 M_2$ blocks of length $N$ from $A$, denoted as

$$A_e = \{A_e(k),\ 0 \le k < M_1 M_2\} \qquad (8)$$

where $A_e(k)$ is expressed as

$$A_e(k) = \{a(k N + i),\ 0 \le i < N\} \qquad (9)$$

(2) Watermark dimension reduction: because the image is two-dimensional, it is converted to one dimension before embedding:

$$V = \{v(k) = w(m_1, m_2),\ 0 \le m_1 < M_1,\ 0 \le m_2 < M_2,\ k = m_1 M_2 + m_2\} \qquad (10)$$

(3) Pseudorandom sequence sorting: to eliminate the similarity between neighboring elements and improve robustness, sort all the elements of $V$ with the linear recursive shift register:

$$V_p = \mathrm{sort}(V) = \{v_p(k) = v(k'),\ 0 \le k, k' < M_1 M_2\} \qquad (11)$$

(4) MDCT: apply the MDCT to all the audio blocks:

$$D_e(k) = \mathrm{MDCT}(A_e(k)), \qquad 0 \le k < M_1 M_2 \qquad (12)$$

where $D_e(k) = \{D_e(k)(i),\ 0 \le i < N\}$ is the $k$-th block of MDCT coefficients.

(5) Choose the intermediate frequency coefficients: we choose the median coefficients in the MDCT domain, to balance robustness against audibility in the digital audio signal.

(6) Embed the watermark: adaptively modify the chosen medium coefficients of $D_e(k)$ to embed the modulated bits:

$$D'_e(k)(i) = \begin{cases} D_e(k)(i)\,(1 + a\, v_p(k)) & m k \le i < m(k+1) \\ D_e(k)(i) & \text{otherwise} \end{cases} \qquad (13)$$

where $m$ is the modulation factor and $a$ is the stretching factor, which controls the watermark embedding intensity. If $a$ is too large, it reduces the use value of the signal and causes audible distortion; if $a$ is too small, the embedded signal is too weak to resist interference, whereas a suitably large $a$ makes the watermark robust to attacks.

(7) Inverse MDCT:

$$A'_e(k) = \mathrm{IMDCT}(D'_e(k)), \qquad 0 \le k < M_1 M_2 \qquad (14)$$

(8) Replace $A_e$ with $A'_e$ to obtain the final watermarked signal.

D. Watermark Extraction

The watermark extraction algorithm is expressed as follows:

(1) Segmentation: segment the watermarked digital audio signal $A_{se}$ in the same way as in embedding.

(2) MDCT: compute the MDCT of $A_e$ and $A_{se}$.

(3) Extract the modulated bits:

$$s'(l) = \frac{D_{se}(k)(l) - D_e(k)(l)}{a\, D_e(k)(l)}, \qquad m k \le l < m(k+1) \qquad (15)$$

(4) Generate the chaotic sequence $\{r(l)\}$ with key $K$, and apply the exclusive-OR operation:

$$v'_{sp}(l) = s'(l) \oplus r(l), \qquad 0 \le l < m M_1 M_2 \qquad (16)$$

(5) Reconstruct $\{v'_{sp}(k)\}$ from $\{v'_{sp}(l)\}$ by majority vote: if the sum over the $m$ samples of a segment is more than $\lfloor m/2 \rfloor + 1$, the watermark bit $v'_{sp}(k)$ is 1; otherwise it is 0.

(6) Inverse sorting: the one-dimensional sequence is recovered by inverting the permutation:

$$V_s = \{v_s(k) = v'_{sp}(k'),\ 0 \le k, k' < M_1 M_2\} \qquad (17)$$
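To illustrate steps (6) of embedding and (3) of extraction, the fragment below applies the multiplicative rule of Equation (13) and its inversion (15) to one block of transform coefficients. It operates on a plain array standing in for the mid-band MDCT coefficients; the MDCT itself (in the paper, an integer lifting MDCT) is outside this sketch, and the stretch factor is an illustrative value.

```python
import numpy as np

def embed_block(coeffs, bit, a=0.05):
    """Eq. (13): multiply the selected coefficients by (1 + a*bit).

    coeffs : 1-D array of mid-band MDCT coefficients for one block
    bit    : the modulated watermark bit s(l), in {0, 1}
    a      : stretch factor trading robustness against audibility
    """
    return coeffs * (1.0 + a * bit)

def extract_block(marked, original, a=0.05):
    # Eq. (15): recover s'(l) from the coefficient ratio, then
    # average over the block and round back to a bit.
    s = (marked - original) / (a * original)
    return int(round(float(np.mean(s))))
```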

(6) Re-sampling: Up-sample the original audio to 58 . With right key. re-sampling attack. Ws {ws (i. the normalized correlation coefficient of the extracted watermarking is 1. the embedding audio watermarking is under a variety of attacks. Watermarking Attack Experiment and Analysis To remove the effect of observer’s experience.23dB. The initial value of chaotic sequence is 0. clipping attack. Watermarking security test ( Left: extracted image with correct key. However. e. j ) In this paper. the finally extracted watermarking image is shown in equation (18). the images are quite different which means we can not extract the watermarking without correct key. ws ) i 1 j 1 2 ¦ w (i.01.3800001. country and classic music. such as re-sampling. The compression ratio is 10:1 (2) Clipping: Cut 10% of the original audio. 0 d k . adding noise attack. A 64×64 binary image after chaotic encryption was embedded to the original audio. the watermarking can not be recovered correctly. The embedding and (1) AVS attack: The audio bitstream goes through the encoder and decoder. PSNR and normalized coefficient are adopted to objectively assess the original and extracted watermarking. original watermarking. The watermarking is extracted from the audio signal and computed the similarity of the original and embedded audio watermarking. j ) s M1 M 2 NC ( w. and g.0 d j  M 2 . right: extracted image without correct key) To test the robustness of the watermarking. the order is 6.(17) The watermarking is increased to 2-D dimension. (4) Median filter: 4 order median filter. if the seed=0. Extracted watermarkings after attacks (a.38. w ') 10 log10 max w '2 ( n) a b c d (19) e f g Figure 4. clipping. we use 22. therefore. Figure 2. re-quantization attack. 0 d i  M1 . Without any processing. popular. experiment and device conditions. The original and embedded watermarking audio is very similar visually and aurally which also can be seen from the waveforms. j )w (i.05 KHz sampling rate. water mark water water mark mark Figure 3. AVS compression attack) ¦¦ w(i. With only 0. noise and AVS compression attack. low pass filtering.0001 differences. and other objective and subjective effect.05 kHz. b. The attacks are realized as follows: B. (3) Low pass filter: Using Butterworth filter. PSNR 1 ¦ (w '(n)  w(n))2 N Normalized Coefficient (NC) PSNR ( w. j ) vs (k ). d. low pass filter attack.8 and P 2 . 8bit quantization resolution and the length of the mono channel audio signal is 30s. EXPERIMENT AND RESULTS Vs {vs ( k ) vsp ( k '). (5) Gaussian noise: Embedding Gaussian noise whose mean is 0 and variance is 0. The PSNR after embedding is 40. The audio signal contains the speech. seed=0. cutoff frequency is 22. k '  M 1 M 2 } extracted results are displayed in Figure 3. j ) ¦ w i 1 j 1 M1 M2 (20) 2 s (i. Watermarking embedding and extraction comparison A. k i * M 2  j} (18) IV. f. Watermarking Security Test The security test is shown in Figure 2. watermarking can be extracted correctly.

From the results we can see that, in the MDCT domain, the algorithm resists these attacks and maintains its robustness.

V. CONCLUSIONS

In this paper, AVS audio watermark embedding, extraction and detection for DRM is realized based on a chaotic map in the MDCT domain. In the MDCT domain, the watermarking intensity and capacity can be modified adaptively. The chaotic key is introduced into the standard AVS audio bitstream to improve robustness, and the scheme is secure and easy to implement. The experiments show that the proposed algorithm and scheme can resist a range of attacks while retaining good audio quality; it can basically meet the application requirements of AVS DRM and copyright protection.

REFERENCES

[1] C. Podilchuk and E. Delp, "Digital watermarking: Algorithms and applications," IEEE Signal Processing Magazine, vol. 18, pp. 33-46, 2001.
[2] S. Pereira, et al., "A secure robust digital image watermarking," in: Proceedings of SPIE, Electronic Imaging: Processing, Printing and Publishing in Colour.
[3] M. Barni, F. Bartolini, and A. Piva, "Improved wavelet-based watermarking through pixel-wise masking," IEEE Transactions on Image Processing, vol. 10, pp. 783-791, 2001.
[4] M. H. Jakubowski and R. Venkatesan, "Image watermarking with better resilience," in: Proceedings of ICIP, 2000.
[5] "Identification and protection of multimedia information," special issue, Proceedings of the IEEE, vol. 87, 1999.
[6] S. Tsekeridou, V. Solachidis, N. Nikolaidis, A. Tefas, A. Nikolaidis, and I. Pitas, "Bernoulli shift generated watermarks: theoretic investigation," in: Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 1989-1992, 2001.
[7] R. L. Devaney, A First Course in Chaotic Dynamical Systems: Theory and Experiment, Cambridge, MA: Perseus Books, 1992.
[8] S. Tsekeridou, V. Solachidis, N. Nikolaidis, A. Tefas, A. Nikolaidis, and I. Pitas, "Markov chaotic sequences for correlation based watermarking schemes," Chaos, Solitons & Fractals, vol. 17, pp. 567-573, 2003.
[9] A. Nikolaidis and I. Pitas, "Comparison of different chaotic maps with application to image watermarking," in: Proceedings of the IEEE International Symposium on Circuits and Systems, pp. 509-512, 2000.
[10] A. Mooney and J. G. Keating, "Noisy optical detection of chaos-based watermarks," in: Proceedings of SPIE Photonics North, vol. 5579, 2004.
[11] A. Mooney and J. G. Keating, "The impact of the theoretical properties of the logistic function on the generation of optically detectable watermarks," Technology for Optical Countermeasures, Optics/Photonics in Defence and Security, 2004.

Walking Modeling Based on Motion Functions

Hao Zhang
School of Computer Science and Technology, Xidian University, Xi'an, China
zhanghao@xidian.edu.cn

Zhijing Liu
School of Computer Science and Technology, Xidian University, Xi'an, China
liuzhijing@vip.163.com

Abstract—A motion-function modeling method is presented to represent motion detected in video images, based on the physical and motional characteristics of the human body. It concentrates on the characteristics of video surveillance and, with walking videos as examples, derives walking functions by extracting and processing the data from the experiments.

Keywords—computer vision; motion recognition; motion-function; biometrics

I. INTRODUCTION

The vision analysis [1] of body motion in computer vision consists of detection, tracking, identification and action understanding in sequences of video images. Because of the demands of sensitive scenarios, including banks, airports and customs, it is an important research domain to identify body movement unattended and at long distance. Many scholars, such as Murase and Sakai [3], Shutler [4], Johnson and Bobick [5], Brand [6][7], Oliver [8], Kitani [9], and Liang Wang [10], have devoted themselves to this domain and presented many identification methods. There are three main classes of action identification methods [2]: template-based, probability-network-based, and syntactic techniques. However, gait-based identification methods are subject to many environmental restrictions in practice, so they do not yet meet the applicable demands of video surveillance.

In this paper, on the basis of Long Ye's idea [12] and the characteristics of video surveillance [11], we utilize the consecutive nature of video information together with physical laws, and present a motion-function method to analyze body motion. We then analyze the data of body motions, and induce and validate the resulting functions. The rest of the paper is organized as follows: Section 2 introduces the theoretic model on biometrics; Section 3 describes the modeling and functions of motion information; Section 4 discusses the experimental results; conclusive remarks are addressed in the final section, Section 5.

II. THEORETIC MODEL

The physical characteristics of the human body and its motion are utilized in this model.

A. The Relationship between Human Silhouette and Time

As video information represents 3D scenes in 2D form, it is difficult to extract a 3D human model directly, whereas a 2D human silhouette can be extracted directly. It is universal that body motion is periodic: while walking, for example, a human's arms and legs move periodically. In this model, the states of body motion are determined by the variations of the areas of bounding rectangles, which reflect the state of motion; the variations of body motion are detected in consequence, so that the goal of detecting body motion is achieved.

B. The Relationship between the Proportion of Step Length to Human Stature and Time

First of all, because of the restrictions of physical factors, the step length is certain to be restricted by human stature: while walking, step length and stature accord with a proportion. Body motion can then be detected in terms of these factors.

C. The Relationship between the Coordinates of the Human Centroid and Time

Body motion in video information is equal to the mechanical motion of a body in physics, and in physics the centroid of a body is an important reference point in the study of mechanical motion. Its definition is: for a system of N particles, the mass center of the system is its centroid. In the study of body motions, the centroid is taken as the reference point for describing the physical parameters of motion, such as orientation and velocity.

III. MODELING AND FUNCTIONING

On this basis, two walking videos of 90° and 45° are selected, where the angles are those between the recording orientation and the direction of body motion; for short, they are labeled the 90° video and the 45° video, respectively, as shown in Fig. 1.

Figure 1. The walking videos of 90° and 45°

A. The Relationship between Human Silhouette and Time

In video surveillance, the exact silhouette of a human cannot be extracted directly and immediately, due to the restrictions of recorders and algorithms, so the bounding rectangles are used as an equivalent representation. Several steps are taken: detecting body motion, extracting the human silhouette with background subtraction, labeling it with a rectangle, and finding the length (l), width (w) and center coordinates (x, y) of the rectangle. On the basis of the extracted data, the areas (s) of the rectangles enclosing the human silhouette and the proportions (g) of the areas of rectangles in adjacent frames are computed:

$$s(i) = l(i) \cdot w(i), \qquad 1 \le i \le t_n \qquad (4)$$
$$g(i) = \frac{s_{i+1}}{s_i}, \qquad 1 \le i \le t_n - 1 \qquad (5)$$

where $t_n$ is the running time of the video. The discrete data are represented and fitted against time (t), and the motion cycles are analyzed in order to induce the expressions of the walking function.

B. The Relationship between the Proportion of Step Length to Stature and Time

By the physical characteristics of the human body, the proportion of a walking racer's step length to stature in competition is between 0.6 and 0.7 [13]; by the statistical data of the videos, it is concluded that the proportion of step length to stature lies between 0.5 and 0.7. The proportion (f) of step length to stature is computed from the rectangle data as

$$f(i) = \frac{w(i)}{l(i)}, \qquad 1 \le i \le t_n \qquad (1)$$

The proportions f and g were fitted against time for both videos, and expressions (2), (3), (6) and (7) were induced; they take the forms summarized in Table I. For example, the 90° curve of f is a pure sinusoid with fitted amplitude 0.25, while the 45° curves carry an additional exponential factor (e.g., the term 0.17·exp(−0.628·x) in g). The data and fitting curves are shown in Fig. 2 and Fig. 3.

Figure 2. The data and fitting curves on the proportion of step length to stature

Figure 3. The data and fitting curve on the proportion of the rectangle areas in adjacent frames
Consequently. human centroid is used to describe physical parameters of body motion.354 The fitting curve of 45°video: (9) d = 74. Based on the laws of mechanism motion. CONCLUSION d = ( x − x0 ) 2 + ( y − y0 ) 2 (8) where (x0. In video surveillance.224 TABLE I. as shown in Table 1. These data are fitted by the characters of video surveillance to find the expressions of walking function. the functions of human walking are studied at large. the relative errors of main parameters of walking functions. other simply ways of actions. On basis of the experiments. periods and velocity are determined by variation of the relevant parameters. as shown in Eq. such as running. C. y0) is an initial pixel of video to represent the distance of body motion. the validity of expressions is proved by experimental results. the characteristic data of human walking are extracted from videos. it will be applied to video surveillance in order to detect and affirm abnormal actions. the expressions were induced. The Relationship between the Coordinates of Human Centroid and Time In mechanism motion. By means of the relationship between distance (d) and time (t). the data were fitted. 4. by means of extracting data. V. The expressions of human walking derived from the experiments are with the characters of high efficiency in computation. jumping. phase (P). so the expressions were proved to be valid. D).48 ⋅ t − 7. As a result. Meanwhile. and complex ones which consist of many simply ones. such as period (T). 4. amplitude (A). 9 and 10.Figure 3. it is not easy to calculate the centroid by extracting data. as its consequence. EXPERIMENTAL RESULTS The experimental results above conclude three arrays of expressions on walking function in 90° and 45° videos. In addition. The data are extracted. remain to be studied and presented further.38 ⋅ t + 1. because of the recording angles and so on. Video Type f (10) The identification of body motion is one of main and latest orientation on study. were less than 10%. In contrast. as shown in Fig. The expressions and results above prove that human actions. With the experiments on testing set. and offset (B. a method [14] to calculate the centroid is presented and applied to the representation of silhouette characters [10]. Euler distance is used as follow: IV. The data and fitting curves on the distance of adjacent rectangle centers C. The fitting curve of 90°video: d = 77. the center of rectangle is used to replace the centroids equivalently. THE EXPRESSIONS OF WALKING FUNCTIONS The Fitting Curves of 45°Video The Fitting Curves of 90°Video g d 2π ⋅ (t − P )) + B1 1 T 2π ⋅ (t − P2 )) + B2 g=A2 ⋅ si n( T d = A3 ⋅ t + B3 f = A1 ⋅ sin( f = A1' ⋅ sin( ' g = A2 ⋅ sin( 2π ⋅ (t − P ' )) ⋅ exp( B1' ⋅ t ) + C1' 1 ' T 2π ' ' ' ⋅ (t − P2' )) + B2 ⋅ exp(C2 ⋅ x) + D2 ' T d = A3' ⋅ t + B3' 62 . angles. The data and fitting curve on the proportion of the rectangle areas for the adjacent frames Figure 4. Consequently. as shown in Fig.

V. CONCLUSION

The identification of body motion is one of the main and newest directions of study. In this paper, the functions of human walking are studied at large: the characteristic data of human walking are extracted from videos, and the data are fitted, using the characteristics of video surveillance, to find the expressions of the walking function. The expressions of human walking derived from the experiments have the advantage of high computational efficiency, and their validity is proved by the experimental results. Meanwhile, other simple actions, such as running and jumping, and complex actions composed of many simple ones, remain to be studied and modeled further. In addition, the method will be applied to video surveillance in order to detect and confirm abnormal actions.

ACKNOWLEDGMENT

The research was supported in part by the Ministry of Education of the People's Republic of China and a Research Project of Guangdong Province (No. 2006D90704017). The authors would also like to thank the anonymous reviewers for their valuable comments, which resulted in an improved manuscript.

REFERENCES

[1] WANG Liang, HU Wei-Ming, and TAN Tie-Niu, "A Survey of Visual Analysis of Human Motion," Chinese Journal of Computers, vol. 25, no. 3, Mar. 2002, pp. 225-237.
[2] DU You-tian, CHEN Feng, XU Wen-li, and LI Yong-bin, "A Survey on the Vision-Based Human Motion Recognition," Chinese Journal of Electronics, vol. 35, 2007.
[3] Murase H and Sakai R, "Moving object recognition in eigenspace representation: gait analysis and lip reading," Pattern Recognition Letters, vol. 17, 1996, pp. 155-162.
[4] Shutler J, Nixon M, and Harris C, "Statistical gait recognition via temporal moments," in Proc. IEEE Southwest Symposium on Image Analysis and Interpretation, 2000, pp. 291-295.
[5] Johnson A and Bobick A, "A multi-view method for gait recognition using static body parameters," in Proc. International Conference on Audio- and Video-based Biometric Person Authentication, 2001, pp. 301-311.
[6] Brand M, "Understanding manipulation in video," in Proc. Int. Conf. on Automatic Face and Gesture Recognition, 1996, pp. 94-99.
[7] Brand M, Oliver N, and Pentland A, "Coupled hidden Markov models for complex action recognition," in Proc. IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 1997, pp. 994-999.
[8] Oliver N M, Rosario B, and Pentland A P, "A Bayesian computer vision system for modeling human interactions," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 8, Aug. 2000, pp. 831-843.
[9] Kitani K M, Sato Y, and Sugimoto A, "Deleted interpolation using a hierarchical Bayesian grammar network for recognizing human activity," in Proc. IEEE Workshop on VS-PETS, 2005, pp. 239-246.
[10] WANG Liang, HU Wei-Ming, and TAN Tie-Niu, "Gait-Based Human Identification," Chinese Journal of Computers, vol. 26, no. 3, 2003, pp. 353-360.
[11] HU Wei-Ming, TAN Tie-Niu, WANG Liang, and Steve Maybank, "A Survey on Visual Surveillance of Object Motion and Behaviors," IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews, vol. 34, no. 3, 2004, pp. 334-352.
[12] YE Long, ZHANG Qin, and LIU Jian-bo, "Gait Tracking Based on Motion Functional Modeling," Journal of Communication University of China (Science and Technology), vol. 14, 2007, pp. 35-37.
[13] XU Shu-Kui, et al., Basic Text of Track and Field, Beijing: Beijing Sport University Press.
[14] FAN Yi-fang, ZOU Liang-chou, YAO Guo-Qiang, and XIONG Xi-Bei, "Research on the Locating of Image Centroid of Human Body Movements——Locating the Human Body Centroid," Journal of Guangzhou Institute of Physical Education, vol. 17, 1997, pp. 84-90.

edu. The reason is that the attributes of tags is not much helpful to classification task except those of “meta” and “a”. Experimental results on a Chinese web-page dataset show that methoss we designed can improve the performance from 75. INTRODUCTION With the rapid development of Internet. In Section 3. we add user-defined tags for text in a webpage.100084 huangwt@tsinghua.2009. stopword removal.88%. etc. The rule is: the “plain text”. the amount of web-pages has been increasing dramatically. and “applet”. “script”.72 64 1) HTML Parsing The purpose of HTML Parsing is to remove html code that is irrelevant and extract text. English lexical analysis. preprocessing. the large collections of web-pages bring people great challenge: how to find what they need.edu. the nonhyperlink text is identified by “text” tag. Qinghai. whose leaf nodes are plain text identified by “text” tag or hyperlink text identified by “anchor”. 60% of the work is data preprocessing. SYSTEM DESIGN AND IMPLEMENTATION Figure 1. people hope to classify web-pages according to their contents. and web-page classification. Following the above rule. and vocabulary selection.00 © 2009 IEEE DOI 10.1109/ICCET. we will introduce a Chinese web-page classification system we implemented. In order to organize and utilize information effectively. We save the parsing result as files of XML format for further preprocessing and feature extracting. remove html source code embedded in tags such as “style”. and some methods on Chinese web-page preprocessing and feature preparation are proposed. the experimental results on a Chinese web-page dataset are shown as well as some discussions. A Chinese Web-page Classification System A. The procedure of HTML Parsing is as follows: Firstly. Webpage classification has been a hotspot in the research domain of text mining and web mining. stemming. In this paper.China. Fujian. Web-page classification is widely applied in the field of vertical search.82% to 81.810016 lyanmin@qhu. personalized search. . Reserve the attributes of “meta” and “a” and discard attributes of other tags. There are six procedures in web-page preprocessing: HTML parsing. While providing all-embracing information. while the hyperlink text is identified by “anchor” tag. Finally. 978-0-7695-3521-0/09 $25. a HTML web-page can be represented as a HTML tag tree. The system design and our proposed methods will be described in section 2.cn Abstract—A detailed design and implementation of a Chinese web-page classification system is described in this paper. Secondly.cn Yanmin Liu Department of Computer Technology and Applications Qinghai University Xining. Fuqing Brunch Fuqing. i.China xlx@fjnu. I. System Architecture There are three parts in our Chinese web-page classification system: web-page preprocessing. Web-page Preprocessing In the whole data mining task.2009 International Conference on Computer Engineering and Technology Preprocessing and Feature Preparation in Chinese Web Page Classification Weitong Huang Department of Computer Science and Technology Tsinghua University Beijing. KeywordsText classification.edu. because no special tags are prepared for text in standard HTML. B. The system architecture is illustrated in Figure 1.e. feature preparation. we conclude our work in Section 4.cn Luxiong Xu Fujian Normal University. II. Feature preparation Chinese web-page Chinese word segmentation.

wdi . Through experiments (see section 3. In this paper.k | c j ) k =1 P (c j ) indicates the document frequency of class c j relative to all other classes. We use the revised formula below to compute the document frequency of word wt . we adopt a more simple and effective method to remove noises.Usually the page layout is controlled by “table” tag. Feature Selection One difficulty in text classification is the high dimension of feature space. 2) English Lexical Analysis and Chinese Word Segmentation In English lexical analysis. adjectives. the hyperlink text is usually used for navigation or advertisement. One is plain text which is content related. nouns and verbs express more semantic meaning. Our experiments are in section 3. while the tables along the two sides. P( wt | c j ) = | V | + ∑ s =1 ∑ d ∈D N ( ws . while there are no separators between words in Chinese text. DF has comparable performance with IG and CHI. 3) Stopword Removal and Stemming Words with high frequency are not only helpless to distinguish documents but also increase the dimension of feature space. C. We adopt document frequency selection in our system. P ( wt | c j ) indicates the frequency that the classifier expects word documents in class document c j . mutual information (MI). there are two open source Chinese word segmentation projects. The content text in the 65 freq( wt ) = Ci ∈C ∑ freq ( wt . Classifier 1) Naïve Bayes Classifier The Naïve Bayes Classifier is a simple but effective text classification algorithm.2). Given a document and a classifier. Because of ICTCLAS’s higher precision. using an approach based on multi-layer HMM. Once there is a HTML tag tree described above. and conjunctions. Part-OfSpeech tagging and unknown words recognition. Such words are called stopword. which is used for stopword removal. spaces are usually used for separators. remove noises and improve classification quality. di ) P(c j | di ) i . articles. words having the same stem can be considered as to be equal. E. The task of “feature selection” is to remove less important words from the feature space. Assigning different weight to different parts can improve the quality of classification. [3][4] compared the above four methods. information gain (IG). It performs very well in practice with high accuracy. 4) Vocabulary Selection A sentence in natural language is made up of words of different parts of speech. and it has the advantage of simplicity and low computation complexity. We use ICTCLAS[6] to extract nouns and verbs from a sentence. and the latter is the actual content text that we can see visually. Feature Extraction and Weighting HTML web-pages are semi-structured text. verbs. many words have variations. One is ChineseSegmenter [5] which works with a version of the maximal matching algorithm.2. The result shows that IG and CHI gain the best performance.3. for example. Institute of Computing Technology in China developed a Chinese lexical analysis system ICTCLAS [6] <body> part includes two types of text. and the experiments are in section 3. This simple algorithm is surprisingly effective. So it is a feasible way to choose only nouns and verbs as feature words in order to reduce feature space. MI is the worst. In English. document frequency (DF). Using stemming [7]. D. we adopt it as the Chinese word segmentation module in our classification system. and x'-test (CHI). The assumption of this method is: the features (words) with low document frequency have small influence on the classification accuracy. 
A web page consists of two parts: the <head> part and the <body> part. The former includes "meta" and "title", which are a summary description of the whole page. The content text in the <body> part includes two types: one is plain text, which is content-related, and the other is hyperlink text. Through experiments (see Section 3.2) we discovered that in a content page (not a hub page), the hyperlink text is usually used for navigation or advertisement, so it is effective to remove noise by simply discarding the hyperlink text. Usually the page layout is controlled by "table" tags: the content text sits in the middle, while the tables along the two sides, the top and the bottom are often noises such as navigation bars and advertisements. Once the HTML tag tree described above is available, some algorithms [1][2] can be applied to remove noise and extract the content text; in this paper, we adopt the simpler and effective method of discarding hyperlink text only.

2) English Lexical Analysis and Chinese Word Segmentation

In English, spaces are usually used as separators, whereas there are no separators between words in Chinese text; Chinese lexical analysis is a prerequisite to Chinese information processing. There are two open-source Chinese word segmentation projects. One is ChineseSegmenter [5], which works with a version of the maximal matching algorithm: when looking for words, it attempts to match the longest word possible. The other is the Chinese lexical analysis system ICTCLAS [6], developed by the Institute of Computing Technology in China using an approach based on a multi-layer HMM; ICTCLAS includes word segmentation, part-of-speech tagging and unknown-word recognition. Because of ICTCLAS's higher precision, we adopt it as the Chinese word segmentation module in our classification system.

3) Stopword Removal and Stemming

Words with high frequency are not only useless for distinguishing documents but also increase the dimension of the feature space; such words are called stopwords. We maintain a stopword list containing 580 English words and 637 Chinese words, which is used for stopword removal. As we know, many English words have variations; using stemming [7], words having the same stem can be considered equal.

4) Vocabulary Selection

A sentence in natural language is made up of words of different parts of speech, such as nouns, verbs, adjectives, adverbs, pronouns, prepositions, articles, and conjunctions. Nouns and verbs express more of the semantic meaning, so it is feasible to choose only nouns and verbs as feature words in order to reduce the feature space. We use ICTCLAS [6] to extract the nouns and verbs from a sentence; the corresponding experiments are in Section 3.3.

C. Feature Extraction and Weighting

HTML web pages are semi-structured text. The header of a web page ("meta" and "title"), which is concise and exact, reflects how the page author summarized the content of the page; headers are always correlated with the topic of the page and contain few noises. Assigning different weights to different parts can therefore improve the quality of classification: properly raising the header weight in the feature space improves classification quality. Based on our experiments (see Section 3.2), we let the ratio of the header weight to the body-text weight be 5:1.

D. Feature Selection

One difficulty in text classification is the high dimension of the feature space. The task of feature selection is to remove the less important words from the feature space, thereby removing noise and improving classification quality. Many feature selection methods have been studied, among them document frequency (DF), information gain (IG), mutual information (MI), and the χ²-test (CHI). [3][4] compared these four methods; the results show that IG and CHI give the best performance, DF has performance comparable to IG and CHI, and MI is the worst. We adopt document frequency selection in our system, for its simplicity and low computational complexity. The assumption of this method is that features (words) with low document frequency have little influence on classification accuracy. We use the revised formula below to compute the document frequency of word $w_t$ relative to class sizes:

$$freq(w_t) = \sum_{C_i \in C} \frac{freq(w_t, C_i)}{num(C_i)}$$

where $freq(w_t, C_i)$ is the document frequency of $w_t$ in class $C_i$ and $num(C_i)$ is the number of documents in class $C_i$.

E. Classifier

1) Naïve Bayes Classifier

The Naïve Bayes classifier is a simple but effective text classification algorithm; this surprisingly effective algorithm performs very well in practice, with high accuracy. Given a document and a classifier, we determine the probability $P(c_j \mid d_i)$ that the document belongs to class $c_j$ by Bayes' rule and the Naïve Bayes assumption:

$$P(c_j \mid d_i) \propto P(c_j)\,P(d_i \mid c_j) \propto P(c_j) \prod_{k=1}^{|d_i|} P(w_{d_i,k} \mid c_j)$$

where $w_{d_i,k}$ denotes the $k$-th word in $d_i$. $P(w_t \mid c_j)$ indicates the frequency with which the classifier expects word $w_t$ to occur in documents of class $c_j$:

$$P(w_t \mid c_j) = \frac{1 + \sum_{d_i \in D} N(w_t, d_i)\, P(c_j \mid d_i)}{|V| + \sum_{s=1}^{|V|} \sum_{d_i \in D} N(w_s, d_i)\, P(c_j \mid d_i)}$$

where $V$ is the vocabulary, $D$ is the set of training documents, $N(w_t, d_i)$ is the count of the number of times word $w_t$ occurs in document $d_i$, and $P(c_j \mid d_i) \in \{0, 1\}$ for hard class labels. $P(c_j)$ indicates the document frequency of class $c_j$ relative to all the other classes:

$$P(c_j) = \frac{1 + \sum_{d_i \in D} P(c_j \mid d_i)}{|C| + |D|}$$

where $|C|$ is the number of classes and $|D|$ is the number of training documents.
The prior is estimated as

$$P(c_j) = \frac{1 + \sum_{d_i \in D} P(c_j \mid d_i)}{|C| + |D|}$$

where $V$ is the vocabulary, $D$ is the set of training documents, $|C|$ is the number of classes, $|D|$ is the number of training documents, $N(w_t, d_i)$ is the count of the number of times word $w_t$ occurs in document $d_i$, and $P(c_j \mid d_i) \in \{0, 1\}$.

III. EXPERIMENTS

A. Chinese Web-page Dataset

We downloaded 6745 Chinese web-pages from http://www.sohu.com/ as our training set, and 2814 web-pages from http://www.sina.com.cn/ as our test set. These web-pages are distributed among 8 categories; Table 1 shows the number of documents in the training set and the test set for each category. We have reasons to choose the training set and the test set from two different sources: our former work shows that performance on a dataset from only one data source is surprisingly high (usually above 95%), but that is not the real world. We therefore decided to use a test set from a different source to evaluate the effect of training.

TABLE I. CHINESE WEB-PAGE DATASET

Category Name  | Training Set (from sohu) | Test Set (from sina)
Auto           | 841                      | 351
Business       | 630                      | 263
Entertainment  | 1254                     | 523
Health         | 784                      | 327
House          | 736                      | 307
IT             | 1050                     | 438
Sports         | 841                      | 351
Women          | 609                      | 254
Sum            | 6745                     | 2814

2) Evaluation Measure. We use the standard measures to evaluate the performance of our classification system: precision, recall, and the F1-measure [8], summarized as Micro-F1.

B. Feature Extraction and Weight Distribution

Section 2.1 has defined plain text, i.e., the non-hyperlink text in web-pages. We extract only the plain text in every web-page as one feature extraction scheme, and compare it with the scheme that extracts the full text. Figure 2 shows the experimental results: when the number of feature words varies from 1,000 to 10,000, the average Micro-F1 of the full-text scheme is 75.82%, while the plain-text scheme reaches 78.93%. The results show that the scheme of extracting only plain text as features can improve the classification quality by 3.11%, by effectively removing the noises in the web-pages.

The header of a web-page, illustrated in figure 3, reflects how the page author has generalized the content of the web-page; it is concise and exact, always correlated with the topic of the web-page, and contains few noises. Properly raising the header weight in the feature space can therefore improve the classification quality. We ran an experiment to determine the ratio of the header weight to the body text weight: choose 4,000 feature words by the document frequency method, and compare the classification quality as the ratio of the header weight to the body text weight varies from 2:1 to 10:1. Micro-F1 is 79.75% when the header and the body text share the same weight, and Micro-F1 rises as the header weight is raised, reaching its maximum when the ratio of the header weight to the body text weight is 5:1. Therefore, we let the ratio of the header weight to the body text weight be 5:1.

Figure 2. Classification results of different feature extraction schemes.
Figure 3. Classification results of different header-weighting schemes.
Figure 4. Classification results using vocabularies of different parts of speech.
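The two experimental knobs of this section, header weighting and the Micro-F1 measure, can be sketched as follows. The 5:1 default and the helper names are illustrative; the paper does not publish code.

```python
from collections import Counter

def weighted_term_counts(header_tokens, body_tokens, ratio=5):
    """Count one header occurrence as `ratio` body occurrences, i.e. the
    5:1 header-to-body weighting chosen in the experiment above."""
    counts = Counter(body_tokens)
    for t in header_tokens:
        counts[t] += ratio
    return counts

def micro_f1(tp, fp, fn):
    """Micro-averaged F1: pool true positives, false positives, and false
    negatives over all eight categories before computing precision/recall."""
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    return 2 * p * r / (p + r) if p + r else 0.0
```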

C. Vocabulary Selection

We studied different candidate vocabularies, including all words versus only nouns and verbs. The results are illustrated in figure 4. The classification result using only nouns and verbs is clearly better than that using all words, which means that nouns and verbs are enough to reflect the content of a web-page and can eliminate the noises caused by pronouns, adjectives, adverbs, quantifiers, and so on, and consequently improve the classification quality.

D. Final Experimental Results

Considering the experimental comparisons and analysis mentioned above, we set down a final web-page classification scheme: extract the plain text in web-pages, raise the ratio of the header weight to the body text weight to 5:1, choose nouns and verbs as candidate features, select 4,000 feature words using the document frequency method, and use the Naïve Bayes classifier to train and test on the Chinese web-page dataset. With this scheme, Micro-F1 reaches 81.88%; in comparison, if we extract the full text and do not use the special feature preparation methods, Micro-F1 is only 75.82%. The figure shows that the average Micro-F1 goes up from 78.93% to 82.00% thanks to increasing the header weight. The precision and recall of each category are illustrated in Table 2. We are satisfied with our improvement, even though some categories remain hard, because many cases cannot be assigned to only one category absolutely; for example, web-pages about women's health information can be classified into the category "Health" as well as the category "Women". That is the real world.

TABLE II. CLASSIFICATION RESULTS OF EACH CATEGORY (precision and recall for Auto, Business, Entertainment, Health, House, IT, Sports, and Women, together with the overall Micro-F1).

IV. CONCLUSION

In this paper, a series of web-page preprocessing and feature preparation methods are proposed. Through experiments, we draw the following conclusions: extracting only the "plain text" can effectively eliminate noises in web-pages, and both raising the header weight and choosing only nouns and verbs as candidate features can improve the classification quality. On our Chinese web-page dataset, the proposed methods improve Micro-F1 greatly, from 75.82% to 81.88%, compared to the full-text method. In the future, we will enrich our Chinese web-page dataset and run experiments on larger and more diverse datasets.

REFERENCES
[1] Lawrence Kai Shih and David R. Karger, "Using URLs and Table Layout for Web Classification Tasks," in Proceedings of WWW'04, New York, USA, 2004.
[2] Chengjie Sun and Yi Guan, "A Statistical Approach for Content Extraction from Web Page," Journal of Chinese Information Processing, 18(5): 17-22, 2004.
[3] Yiming Yang and Jan O. Pedersen, "A Comparative Study on Feature Selection in Text Categorization," in Proceedings of ICML, Nashville, Tennessee, USA, 1997.
[4] Songwei Shan, Shicong Feng, and Xiaoming Li, "A Comparative Study on Several Typical Feature Selection Methods for Chinese Web Page Categorization," Computer Engineering and Applications, 39(22): 146-148, 2003.
[5] Chinese Segmenter: http://www.mandarintools.com/segmenter.html
[6] ICTCLAS: http://www.nlp.org.cn/project/project.php?proj_id=6
[7] Porter Stemming Algorithm: http://www.tartarus.org/martin/PorterStemmer/
[8] C.J. van Rijsbergen, Information Retrieval, Butterworth, London, 1979.

2009 International Conference on Computer Engineering and Technology

HIGH PERFORMANCE GRID COMPUTING AND SECURITY THROUGH LOAD BALANCING

V. Prasanna Venkatesh, Final year, Department of Information Technology, St. Joseph's College of Engineering, Chennai, India. E-mail: vprasanna.v@gmail.com
V. Sugavanan, Final year, Department of Information Technology, St. Joseph's College of Engineering, Chennai, India. E-mail: sugavanan.1310@gmail.com

Abstract

It has always been a great challenge to balance both security and performance. Here we present a scenario in which the local scheduler of a networked system can schedule its jobs on and across the other systems in the network securely, thereby creating an environment in which no system is idle even when only two systems are communicating, so high performance is assured. When we look at the other side of the coin of such networked systems, security concerns arise. These issues are handled by employing a new level of managerial systems called Authenticated Distribution Managers (ADMs) for managing the cooperating systems. An ADM does the work of job division and integration based on the security implications of its Authenticator. Any system can select any other system as an ADM and transfer its security policies to it; thus security is not compromised even if there is load distribution across various systems. We thus present an overview of a high performance network in which security is not compromised.

Keywords: Security, Load sharing, High Performance, Security Authentication, Authenticated Distribution Managers (ADM), job division, balancing.

1. Introduction

The topic chosen here for discussion elaborates on the basic concepts of grid computing and load balancing and their advantages, and advocates the development of a new technique: one that does not try to eliminate the shortcomings of either concept, but instead combines the advantages of both. This clubs together distributed computing and load sharing. The discussion proceeds by explaining a grid system, discussing the load sharing ideas of the grid, and emphasizing its advantage of high resource utilization; we try to throw light on the tremendous potential this technique possesses for providing high performance output in a distributed network. The section following the discussion of grid computing explains the advanced methods of load balancing, which provide various security implementations and protocols for highly secure operations. The new technique that we discuss later detects a new opening in the security implementations of load balancing by distributing them through a grid system, which allows faster implementation of

these security measures in an efficient manner.

2. Grid computing

A grid is defined as a group of nodes that are connected to each other through a number of connecting paths for communication. Grid computing is a type of parallel computing technology in which the computers involved, called "nodes", are almost completely independent of each other in terms of resources; these nodes are more heterogeneous than those present in other distributed systems. The nodes involved in the process are voluntary, and the interaction and mediation between the nodes of the grid are performed through a management interface; all transactions in the grid go through this interface. Grid computing allows a cluster of loosely coupled computers to perform large tasks, or tasks that generally consume more resources and time than is feasible for a single system.

(Figure: clients and services written in various languages, e.g., Java, C, and Python clients and services, mediated by a management interface that handles security, authentication, and other protocol implementations.)

Advantage: A grid computing system as discussed above provides sharing of a task that a single node is unable to perform; when a node would otherwise be idle, its available resources are utilized to provide enhanced performance. High resource utilization, and high performance through enhanced resource utilization, are thus the main features of grid computing.

Disadvantage: Grid computing is a new concept, and the implications in this field have not been explored as much as the other areas of distributed systems. The obvious drawback of such new systems is their security implementations. For this reason, we look for systems that have well-established security implementations that can be performed faster through a grid system. Hence, in our new technique, we try to implement security in grid computing by using the advanced security provided by load balancing systems, run within a grid system to produce high performance.

Load Balancing: Load balancing is the process in which the entire workload on a system is distributed among the nodes of a network so that resource utilization and time management can be enhanced in the network.

This technique is very useful for optimizing the resources of the network: a server manages the distribution of the workload among the nodes of the network. The server that performs this task is called the "Load Balancer". The main features of a load balancer are:

Security: The load balancer provides security for the data that needs to be shared between systems at the time of load balancing. These security aspects involve the implementation of available security protocols such as: 1. the SSL (Secure Socket Layer) protocol for TLS (Transport Layer Security); 2. security through firewalls; and 3. protection against DDoS (Distributed Denial of Service).

Authentication: Authentication is required in load balancing when the load is distributed to many nodes in the network. It is necessary that each of the client systems is an authenticated node of the network so that data transfer can be performed. Authentication of the client systems is therefore an important aspect of the discussion, and it is indispensable in any network that implements high security.

Reliability: Load balancing techniques provide reliability through redundancy. Redundancy is also required for authentication services and for implementing the basic feature of distributed systems, namely parallelism.

Advantage: The notable feature of the load balancing concept is its highly advanced security implementations through various security and authentication protocols. This feature can be combined with the grid concept to enhance the grid's security implementations.

Disadvantage: The drawback of this concept lies in the fact that implementing so many security features on each and every node in a grid system results in enormous time consumption. This would ultimately cause the main feature of grid systems, namely higher performance through faster outputs, to suffer attrition. It is thus unfeasible to implement the security of a load balancing system on each and every node of the grid system every time two or more nodes are communicating.

Approach: To attain our goal, we primarily explore the inherent parallelism principles of most complex problems, viewed through the prism of their immanent natural distributed parallelism, and we look for effective and efficient methods of implementing the security aspects required for the distributed environment, so as to provide reliability to the entire network. We implement the usage of the security aspects of a load balancing system by integrating them into a set of loosely coupled systems in a grid. The approach discussed below provides a simple, feasible solution to the above situation and aims at solving it by combining the above two concepts into a single system. This is performed by using the concept of

distribution of security implementation throughout a grid.

Role of the ADM: In a grid system, every node has access to the grid only after authentication has been performed by the respective management agent; hence every node is authenticated. It is therefore possible to distribute the security aspects of one node in the system by implementing them through other nodes in the grid. When a node is in communication with a system, which may or may not be part of the grid, and needs to implement its security standards and protocols, that node is called the "Primary node". The primary node designates an Authenticated Distribution Manager (ADM), chosen from a set of idle systems present in the grid whose states are monitored by a specially designated system in the grid. Once the primary node obtains its ADM, it hands over its security systems and protocol implementations to the ADM. The ADM then identifies other idle nodes in the grid and distributes to each of these nodes the load of the security implementation for the primary node. In this way, the load of security implementation for one node in a grid can be balanced between all the idle nodes present in the grid, reducing the time requirements for the security implementation and also providing optimization of resources in the grid.

(Figure: a communicating system connected to the primary node; the primary node delegates to the ADM, which distributes the security implementation of the primary node among the idle nodes in the grid.)

Advantages: The advantages of this new technique are listed as follows: 1. High security for grid systems. 2. High performance for grid systems through faster security implementations. 3. Improved efficiency in grid computing. 4. Improved resource optimization by utilizing the idle nodes in a grid. 5. Utilization of the resources of a grid system for security implementation.
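The paper leaves the ADM's distribution algorithm open, so the sketch below is only one plausible reading: pick the first idle node as the ADM and spread the primary node's security tasks round-robin over the remaining idle nodes. All names here are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    idle: bool = True
    security_tasks: list = field(default_factory=list)

def designate_adm(primary_tasks, grid_nodes):
    """Choose an ADM among the idle nodes and distribute the primary node's
    security tasks (authentication, SSL handshakes, firewall checks, ...)
    round-robin across the other idle nodes.  Round-robin is an assumption,
    not the paper's specified policy."""
    idle = [n for n in grid_nodes if n.idle]
    if not idle:
        # the paper suggests a dedicated "master node" for this case
        raise RuntimeError("no idle node available to act as ADM")
    adm, workers = idle[0], idle[1:] or [idle[0]]
    for i, task in enumerate(primary_tasks):
        workers[i % len(workers)].security_tasks.append(task)
    return adm
```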

Scope for future research: 1. Further possibilities in the above-discussed technique are available for research, and further areas are to be explored in the future. 2. A possible drawback of the new approach lies in the implementation of the distribution algorithms in the ADM. Constraints may occur in situations where none of the nodes in the grid system is idle; in such situations, it might be necessary to maintain at least one node, called the "master node", that will pretend to be always idle in the grid, perform only the functions of the ADM, and also implement the security of the grid system.

Conclusion and future vision: The idea discussed above allows the integration of two major futuristic and fast developing techniques used in distributed environments, namely grid computing and load balancing, in a way that not only integrates their advantages but also nullifies each other's drawbacks by implementing one technique within the other's domain. This in turn allows resource optimization, thereby reducing the time requirement for a task in the grid. Thus, we enable high performance in the grid system by balancing the load of security implementation, using load balancing between the idle nodes of the grid, which provides tremendous scope for the development of both the security and the performance of any distributed system.

Key References:
1. Asser N. Tantawi and Don Towsley, "Optimal static load balancing in distributed computer systems," Journal of the ACM, Volume 32, issue 2, pages 445-465.
2. Parvin Asadzadeh, Rajkumar Buyya, Chun Ling Kei, Deepa Nayar, and Srikumar Venugopal, "Global Grids and Software Toolkits," Technical Report GRIDS-TR-2004-5, Grid Computing and Distributed Systems Laboratory, University of Melbourne, Australia, July 1, 2004.

2009 International Conference on Computer Engineering and Technology

Research on the Synthesis Control of Force and Position in an Electro-Hydraulic Servo System

Yadong Meng, Changchun Li, Hao Yan, Xiaodong Liu
School of Mechanical, Electronic and Control Engineering, Beijing Jiaotong University, Beijing, 100044, China
06116282@bjtu.edu.cn

Abstract

The synthesis control of force and position in electro-hydraulic servo systems has been applied in some special occasions where the force control and the position control in one system need to run simultaneously. Considering that the synthesis control should meet the requirements on force and position at the same time, a typical system adopting this kind of control algorithm is analyzed. Applying unchanged structure theory, the compensated transfer functions are deduced and the compensated state-space model is presented. Introducing the linear quadratic performance functional, the optimized control formula for the synthesis control of force and position on total state feedback is deduced. The simulation results prove that the compensated state-space model has superior performance.

1. Introduction

Electro-hydraulic servo systems that aim at force control or position control independently are familiar. The load simulator commonly used in the aerospace field is a typical force application system disturbed by a position interference signal; the purpose of each control algorithm applied to such a system is to eliminate the position interference, or to keep the force unchanged [5]. Literature [1] discusses all kinds of force application systems with interference in detail. In some special occasions, however, we are faced with the demand for force control accompanied by position control: the force control and the position control need to run simultaneously in one system, and their performance requirements (given curves) must be met together, although there is no relation between the two given curves. In some existing electro-hydraulic servo systems, the force control is auxiliary to the position control: the position interference signal is a random, uncontrolled signal, and we regulate the system according to the force feedback value to ensure position control [2-4]; the position control is the final task of the system. Such a position control system with force feedback, in which the force feedback signal influences the position control and the two disturb each other, we consider a special case. This kind of control task has been widely studied in the robotics field [6,7], but few works have been done on electro-hydraulic servo systems.

2. System description and modeling

2.1. System description

Fig. 1 shows a typical synthesis control system, which contains two proportional valves, two symmetrical hydraulic cylinders, an elastic load between the two cylinder rods, a position sensor, and two pressure sensors. The left side subsystem is the force application control system; the right side subsystem is the position control system. The driving force of the left side force application control system is worked out from the pressure difference multiplied by the effective area; the value measured by the force sensor is the elastic force of the spring. The equivalent input of the position interference signal to the system is zero.

The system parameters are as follows: $F_0$ is the given driving force; $D_0$ is the given cylinder rod position; $F_1$ is the practical driving force; $D_1$, $D_2$ are the practical positions of the two cylinder rods, respectively; $W_{11}$, $W_{21}$ are the input signal amplifiers; $W_{12}$, $W_{22}$ are the transfer functions of the two amplifiers, respectively; $W_{13}$, $W_{23}$ are the transfer functions of the two proportional valves, respectively; $W_{14}$ is the coefficient of the force-to-electric transform; $W_{24}$ is the coefficient of the position-to-electric transform;

$m_1$, $m_2$ are the total masses of the two cylinder rods, respectively; $K_s$ is the elasticity coefficient of the spring; $F_s$ is the elastic force of the elastic load; $I_1$, $I_2$ are the electric currents to the two proportional valves, respectively; $B_{1V}$, $B_{2V}$ are the openings of the two proportional valve cores, respectively; $Q_{1L}$, $Q_{2L}$ are the load flows of the two cylinders, respectively; $P_{1L}$, $P_{2L}$ are the load pressures of the two proportional valves, respectively; $K_{1B}$, $K_{2B}$ are the flow coefficients of the two proportional valves, respectively; $K_{1p}$, $K_{2p}$ are the flow-pressure coefficients of the two proportional valves, respectively; $C_{1t}$, $C_{2t}$ are the total leakage coefficients of the two cylinders, respectively; $A_1$, $A_2$ are the effective areas of the two cylinders, respectively; $V_1$, $V_2$ are the overall volumes of the pipeline and cylinder on the two sides, respectively; $\beta_e$ is the bulk elastic modulus of the hydraulic oil.

2.2. The system fundamental equations

This is a typical synthesis control system: it has two input signals (the given driving force and the given position) and two output signals (the practical driving force and the practical position). According to the basic working theory of electro-hydraulic servo systems [1,8], the load flow equations of the two proportional valves are:

$$Q_{1L} = K_{1B} B_{1V} - K_{1p} P_{1L} \quad (1)$$
$$Q_{2L} = K_{2B} B_{2V} - K_{2p} P_{2L} \quad (2)$$

The load flow equations of the two cylinders:

$$Q_{1L} = A_1 s D_1 + C_{1t} P_{1L} + \frac{V_1}{4\beta_e} s P_{1L} \quad (3)$$
$$Q_{2L} = A_2 s D_2 + C_{2t} P_{2L} + \frac{V_2}{4\beta_e} s P_{2L} \quad (4)$$

The movement equations of the two cylinder rods:

$$m_1 s^2 D_1 = F_1 + K_s (D_2 - D_1) \quad (5)$$
$$m_2 s^2 D_2 = A_2 P_{2L} + F_s \quad (6)$$

The equations relating to feedback and gain:

$$B_{1V} = W_{13} I_1 \quad (7)$$
$$B_{2V} = W_{23} I_2 \quad (8)$$
$$I_1 = W_{12} (F_0 W_{11} - F_1 W_{14}) \quad (9)$$
$$I_2 = W_{22} (D_0 W_{21} - D_2 W_{24}) \quad (10)$$

The elastic force of the load spring:

$$F_s = K_s (D_1 - D_2) \quad (11)$$

2.3. The system transfer functions

The feedback coefficients $W_{14}$ and $W_{24}$ relate to the operation of the system feedback signals; these two coefficients are set to zero while deducing the system open-circuit transfer functions, which we need to study first. For the left side force application subsystem, the two input signals are the given driving force $F_0$ and the practical position of the right side cylinder $D_2$; the single output signal is the practical driving force of the left side cylinder, $F_1$. According to equations (1)(3)(5)(7)(9), we can deduce the transfer function relative to the input signals $F_0$ and $D_2$:

$$F_1(s) = \frac{W_{11} W_{12} W_{13} K_{1B} A_1 \left( \dfrac{m_1}{K_s} s^2 + 1 \right) F_0 - A_1^2 s D_2}{\dfrac{m_1 V_1}{4\beta_e K_s} s^3 + \dfrac{m_1 K_{1t}}{K_s} s^2 + \left( \dfrac{V_1}{4\beta_e} + \dfrac{A_1^2}{K_s} \right) s + K_{1t}} \quad (12)$$

where $K_{1t} = K_{1p} + C_{1t}$ is the total flow-pressure coefficient of the left side force application subsystem.

For the right side position control subsystem, the two input signals are the given cylinder rod position $D_0$ and the elastic force of the elastic load $F_s$; the single output signal is the practical position of the right side cylinder rod, $D_2$. According to equations (2)(4)(6)(8)(10), we can deduce the transfer function relative to the input signals $D_0$ and $F_s$:

$$D_2(s) = \frac{A_2 K_{2B} W_{23} W_{22} W_{21} D_0 - \left( \dfrac{V_2}{4\beta_e} s + K_{2t} \right) F_s}{\dfrac{m_2 V_2}{4\beta_e} s^3 + m_2 K_{2t} s^2 + \left( \dfrac{K_s V_2}{4\beta_e} + A_2^2 \right) s + K_{2t} K_s} \quad (13)$$

where $K_{2t} = K_{2p} + C_{2t}$ is the total flow-pressure coefficient of the right side position control subsystem. The transfer function equations (12)(13) do not contain the feedback elements; they are open-circuit transfer functions.

Fig. 1. System circuit.
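To make equation (13) concrete, the sketch below builds the open-circuit transfer function $D_2(s)/D_0(s)$ (with $F_s = 0$) using SciPy. All numeric parameter values are illustrative placeholders, not values from the paper.

```python
import scipy.signal as sig

# Illustrative parameter values (assumptions, not the paper's data).
A2, K2B, K2t, Ks = 1.2e-3, 2.0e-4, 5.0e-12, 1.0e5
m2, V2, beta_e = 20.0, 1.0e-3, 7.0e8
W21 = W22 = W23 = 1.0

num = [A2 * K2B * W21 * W22 * W23]            # numerator of D2(s)/D0(s), Fs = 0
den = [m2 * V2 / (4 * beta_e),                # s^3 coefficient
       m2 * K2t,                              # s^2 coefficient
       Ks * V2 / (4 * beta_e) + A2 ** 2,      # s^1 coefficient
       K2t * Ks]                              # s^0 coefficient
G = sig.TransferFunction(num, den)
t, y = sig.step(G)                            # open-circuit step response
```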

3. Constructing the compensated state-space model

3.1. Compensation by unchanged structure theory

Literature [1] discusses unchanged structure theory in detail. Applying suitable structure compensation can improve the system response speed and eliminate strong outer interference. In this design, structure compensation is applied to the electro-hydraulic position control system and the electro-hydraulic force application system respectively; the two structure compensation elements are applied simultaneously in order to eliminate the cross influence between the force application subsystem and the position control subsystem.

The input signal of the compensation element in the force application system is the left side cylinder rod velocity $sD_1$. The compensation transfer function is:

$$T_1 = \frac{A_1}{W_{13} K_{1B}} \quad (14)$$

The input signal of the compensation element in the position control system is the right side cylinder load pressure $P_{2L}$. The compensation transfer function is:

$$T_2 = \left( \frac{V_2}{4\beta_e} s + K_{2t} \right) \frac{1}{W_{23} K_{2B}} \quad (15)$$

The system structure changes after compensation. The system control block diagram can be drawn from the system fundamental equations; the blocks connected by dashed lines in Fig. 2 are the structure compensation blocks. (Fig. 2. Frame chart of the system.) The transfer function equations (12)(13) change to:

$$F_1 = \frac{W_{11} W_{12} W_{13} K_{1B} A_1}{\dfrac{V_1}{4\beta_e} s + K_{1t}} \cdot F_0 \quad (16)$$

$$D_2 = \frac{W_{21} W_{22} W_{23} K_{2B}}{A_2 s} \cdot D_0 \quad (17)$$

The force application control system changes into a SISO system whose input signal is $F_0$ and whose output signal is $F_1$; the position control system changes into a SISO system whose input signal is $D_0$ and whose output signal is $D_2$.

3.2. The compensated state-space model

The state-space equation is the foundation of modern control theory, and for a multi-input, multi-output system the state-space model is more suitable. Set the practical driving force $F_1 = x_1$, the practical cylinder rod position $D_2 = x_2$, the given driving force $F_0 = u_1$, and the given cylinder rod position $D_0 = u_2$. Equation (16) leads to:

$$\dot{x}_1 = a_{11} x_1 + b_{11} u_1 \quad (18)$$

where $a_{11} = -\dfrac{4\beta_e K_{1t}}{V_1}$ and $b_{11} = \dfrac{4\beta_e W_{11} W_{12} W_{13} K_{1B} A_1}{V_1}$. Equation (17) leads to:

$$\dot{x}_2 = b_{22} u_2 \quad (19)$$

where $b_{22} = \dfrac{W_{21} W_{22} W_{23} K_{2B}}{A_2}$. From equations (18)(19) we can deduce the following state-space equations:

$$\dot{x} = Ax + Bu, \qquad y = Cx \quad (20)$$

where

$$x = \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} F_1 \\ D_2 \end{bmatrix}, \quad u = \begin{bmatrix} u_1 \\ u_2 \end{bmatrix} = \begin{bmatrix} F_0 \\ D_0 \end{bmatrix}, \quad A = \begin{bmatrix} a_{11} & 0 \\ 0 & 0 \end{bmatrix}, \quad B = \begin{bmatrix} b_{11} & 0 \\ 0 & b_{22} \end{bmatrix}, \quad C = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}$$

Equation (20) is the compensated state-space model of the synthesis control of force and position. This model is based on the two compensation elements added to the system, and the additional compensation elements determine the accuracy of the compensated model.
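A minimal numeric rendering of model (20) follows; all parameter values below are placeholders for illustration, not the paper's data.

```python
import numpy as np

# Placeholder parameter values (illustrative only).
beta_e, V1 = 7.0e8, 1.0e-3                 # oil bulk modulus, left-side volume
K1t, K1B, A1 = 5.0e-12, 2.0e-4, 1.2e-3
K2B, A2 = 2.0e-4, 1.2e-3
W11 = W12 = W13 = W21 = W22 = W23 = 1.0

a11 = -4 * beta_e * K1t / V1                           # eq. (18)
b11 = 4 * beta_e * W11 * W12 * W13 * K1B * A1 / V1     # eq. (18)
b22 = W21 * W22 * W23 * K2B / A2                       # eq. (19)

A = np.array([[a11, 0.0], [0.0, 0.0]])   # state x = [F1, D2]
B = np.array([[b11, 0.0], [0.0, b22]])   # input u = [F0, D0]
C = np.eye(2)                            # output y = x, eq. (20)
```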

4. Application of the linear quadratic optimized control

The control law given by the linear quadratic optimized control is a function of the system state variables. Because the synthesis control of force and position in the electro-hydraulic servo system has two input control variables and requires two output variables to meet the given curves, we rewrite equation (20) as:

$$\dot{x}(t) = Ax(t) + Bu(t), \qquad y(t) = Cx(t) \quad (21)$$

The initial condition of the system is:

$$x(t_0) = x_0 \quad (22)$$

The quadratic optimized control problem is as follows. Let the demanded output be $z(t)$; the output error is

$$e(t) = z(t) - y(t) \quad (23)$$

The performance functional is

$$J = \frac{1}{2} \int_{t_0}^{t_f} \left[ e^T(t) Q_1 e(t) + u^T(t) Q_2 u(t) \right] dt \quad (24)$$

where $Q_1$, $Q_2$ are positive definite matrices. Introducing the performance functional is convenient: within the time segment $[t_0, t_f]$ we look for the optimized control $u(t)$ that makes the performance functional reach its minimum value. The real substance is to keep the error small while consuming as little power as possible, so as to reach the optimized control of consumed power and error. For the synthesis control of force and position this means: within a certain time segment, make the practical driving force and the practical position accord with the given driving force and the given position, while the consumed power should be as small as possible.

4.1. Realization of the linear quadratic optimized control

The above optimized control problem belongs to the multi-tracker problem in optimized control. Solving the equations introduces the Riccati equation; the solution is as follows [9]. The optimized control is:

$$u^*(t) = -Q_2^{-1} B^T P x(t) + Q_2^{-1} B^T g \quad (25)$$

where $P$ and $g$ satisfy the following equations:

$$-PA - A^T P + P B Q_2^{-1} B^T P - C^T Q_1 C = 0 \quad (26)$$

$$g \approx \left[ P B Q_2^{-1} B^T - A^T \right]^{-1} C^T Q_1 z \quad (27)$$

The state trajectory under the optimized control obeys:

$$\dot{x}(t) = \left[ A - B Q_2^{-1} B^T P \right] x(t) + B Q_2^{-1} B^T g \quad (28)$$

The optimized control $u^*(t)$ contains the state variable $x(t)$; because the state variables change at every moment, the closed-loop feedback control is constructed on the basis of all the state variables. This optimized control formula makes use of all the state information of the system, and it is easily realized in an electro-hydraulic servo system: we can measure the pressures on the two sides of a cylinder, from which the driving force of the cylinder can be worked out, and the other variables can be measured easily. Thus the closed-loop optimized control by state feedback can be obtained.
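For the steady-state case, equations (25)-(27) can be evaluated with SciPy's continuous-time Riccati solver. The sketch below assumes a constant demanded output z, the matrices from the previous section, and a stabilizable pair (A, B); the function name is our own.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def lq_tracking_gains(A, B, C, Q1, Q2, z):
    """Steady-state rendering of eqs. (25)-(27): P from the algebraic
    Riccati equation, g from eq. (27), feedback gain K = Q2^{-1} B^T P,
    so that u*(t) = -K x(t) + Q2^{-1} B^T g."""
    # solve_continuous_are solves A'P + PA - P B Q2^{-1} B' P + C'Q1C = 0,
    # which is eq. (26) up to an overall sign.
    P = solve_continuous_are(A, B, C.T @ Q1 @ C, Q2)
    K = np.linalg.solve(Q2, B.T @ P)
    g = np.linalg.solve(P @ B @ np.linalg.solve(Q2, B.T) - A.T, C.T @ Q1 @ z)
    return P, K, g

# Example usage with unit weights and a constant demand z = [50 N, 0.005 m]:
# P, K, g = lq_tracking_gains(A, B, C, np.eye(2), np.eye(2), np.array([50.0, 0.005]))
```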

5. The simulation of the system

In this design, we adopted physical model simulation; with all other conditions kept unchanged, the simulation was executed on the non-compensated model and on the compensated model. The simulation parameters are as follows: the given driving force curve is a standard sinusoid, and the given position curve is a standard sinusoid too, but with a different period (the two periods are 1 s and 2 s). The force curve has an amplitude of 10 N, with a maximum of 55 N and a minimum of 45 N; the position curve has an amplitude of 10 mm, with a maximum of 5 mm and a minimum of -5 mm.

Fig. 3 shows the curves of the given driving force and the practical driving force based on the non-compensated model, and Fig. 4 shows the same curves based on the compensated model. Comparing Fig. 3 and Fig. 4, we can deduce that the error is reduced after model compensation; in particular, the error of the force application system disturbed by the position interference near the minimum force value is restrained. Fig. 5 shows the curves of the given position and the practical position based on the non-compensated model, where the coupling influence is serious; Fig. 6 shows the same curves based on the compensated model. Comparing Fig. 5 and Fig. 6, the position error decreases: the practical position curve is closer to the given position curve, and the speed of the system response increases.

Fig. 3. The given force and practical force before compensation. Fig. 4. The given force and practical force after compensation. Fig. 5. The given position and practical position before compensation. Fig. 6. The given position and practical position after compensation. (Axes: force in N or position in mm versus time in s.)

6. Conclusion

The force control circuit and the position control circuit in one electro-hydraulic servo system are compensated simultaneously using unchanged structure theory; the compensated state-space model is a simplified double-input, double-output system. The system state-space model is established on the basis of the compensated transfer functions; introducing the linear quadratic performance functional, the optimized control formula for the synthesis control of force and position on total state feedback is deduced. The simulation results indicate that, utilizing the compensated state-space model, the control performance of the system is improved obviously.

References

[1] Changnian Liu, Optimization Design Theory of Hydraulic Servo Systems, Beijing: Metallurgical Industry Press, 1989.
[2] Yongjian Feng, "The Self-adaptation Control Model for Eliminating Surplus Force Caused by Motion of Loading System," Acta Aeronautica et Astronautica Sinica.
[3] Xinmin Wang, Weiguo Liu, "A Study of Suppress Strategy for Extra Force on Control Loading Hydraulic Servo System," 2007.
[4] Liwen Wang, "Neural-network Internal Feedback Control for Electro-hydraulic Servo Loading," Acta Armamentarii, 2007.
[5] Changchun Li, Yadong Meng, Xiaodong Liu, "Motion Synchronization of Electro-hydraulic Servo System," Machine Tool & Hydraulics, 2007.
[6] Yun Zhang, Zhi Liu, "Neural network robotic control for a reduced order position/force model," Control Theory & Applications.
[7] Xianlun Wang, Yuxia Cui, Yuhua Li, "Intelligent Force/Position Control of Robotic Manipulators Based on Fuzzy Logic," Journal of System Simulation.
[8] Hongren Li, Hydraulic Control Systems, Beijing: National Defence Industry Press, 1981.
[9] Bao Liu, Modern Control Theory, Beijing: China Machine Press, 2002.


International Conference on Computer Engineering and Technology

Session 2


2009 International Conference on Computer Engineering and Technology

Features Selection using Fuzzy ESVDF for Data Dimensionality Reduction

Safaa Zaman and Fakhri Karray
Electrical and Computer Engineering Department, University of Waterloo, Waterloo, Canada
szaman@pami.uwaterloo.ca, karray@pami.uwaterloo.ca

Abstract—This paper introduces a novel algorithm for features selection based on a Support Vector Decision Function (SVDF) and a Forward Selection (FS) approach with a fuzzy inferencing model. In the new algorithm, Fuzzy Enhancing Support Vector Decision Function (Fuzzy ESVDF), features are selected stepwise, one at a time, by using the SVDF to evaluate the weight value of each specified candidate feature and then applying FS with the fuzzy inferencing model to rank the feature according to a set of rules based on a comparison of performance. Using a fast and simple approach, the Fuzzy ESVDF algorithm produces an efficient features set and, thus, provides an effective solution to the dimensionality reduction problem in general. We have examined the feasibility of our approach by conducting several experiments using five different datasets. The experimental results indicate that the proposed algorithm can deliver a satisfactory performance in terms of classification accuracy, False Positive Rate (FPR), training time, and testing time.

Keywords—features selection; features ranking; support vector machines; support vector decision function; Sugeno fuzzy inferencing model; Fuzzy Enhancing Support Vector Decision Function (Fuzzy ESVDF).

I. INTRODUCTION

Dimensionality reduction [11, 12] is an important topic in machine learning. It reduces the number of features and removes irrelevant, redundant, or noisy data, thus improving the overall performance of the classifier and overcoming many problems such as the risk of "overfitting." Moreover, it helps us to understand the data and reduces the measurement and storage requirements. Elimination of useless (irrelevant and/or redundant) features enhances the accuracy of the classification while speeding up the computation; it also simplifies classification by searching for the subset of features that best classifies the training set.

Current dimensionality reduction methods can be categorized into two classes: features extraction and features selection. Features extraction [4, 5] involves the production of a new set of features from the original features in the data through the application of some mapping; the dominant features extraction techniques are Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA). In contrast, features selection [6, 8, 9, 10] selects the "best" subset of the original features. In terms of features selection, several researchers have proposed identifying important features through wrapper and filter approaches. The wrapper method [13, 14, 16] exploits a machine learning algorithm to evaluate the goodness of features or feature sets. It provides better performance in the selection of suitable features, since it employs the performance of the learning algorithm as an evaluation criterion; the most used wrapper methods are Forward Selection (FS), Backward Elimination (BE), and Genetic search. However, wrapper approaches demand heavy computational resources: they tend to be much slower than feature filters because they must repeatedly call the induction algorithm and must be re-run when a different induction algorithm is used. On the other hand, they can achieve better results than filters, because they are tuned to the specific interaction between an induction algorithm and its training data. The filter method does not use any machine learning algorithm to filter out irrelevant and redundant features; instead, it uses the underlying characteristics of the training data to evaluate the relevance of the features or feature set by some independent measures such as distance measures, correlation measures, and consistency measures [17, 18]. It is simple and fast; the most used techniques in this area are the Chi-Square measure, information gain, and the odds ratio.

This paper introduces a new features selection and ranking method, Fuzzy Enhancing Support Vector Decision Function (Fuzzy ESVDF), which integrates the Support Vector Decision Function (SVDF) and Forward Selection (FS) approaches with the fuzzy inferencing model to select the best features set as an application input. It reduces the number of features and removes irrelevant, redundant, and noisy data, thus improving the overall system performance of many applications.

II. BACKGROUND

This section introduces a brief description of Support Vector Machines (SVMs) and features ranking, the FS approach, and fuzzy inferencing systems.

A. SVMs and Features Ranking

An SVM is a machine learning method that is based on statistical learning theories [20]. The SVM classifier computes a hyperplane that separates the training data into two different sets corresponding to the desired classes. Since there are often many hyperplanes that separate the two classes, the SVM classifier picks the hyperplane that maximizes the margin. Training an SVM is a quadratic

optimization problem with bound constraints and linear equality constraints:

$$\max_{\alpha} W(\alpha) = \sum_{i=1}^{n} \alpha_i - \frac{1}{2} \sum_{i,j=1}^{n} \alpha_i \alpha_j y_i y_j K(x_i, x_j) \quad (1)$$

$$\text{subject to } C \ge \alpha_i \ge 0, \quad \sum_{i=1}^{n} \alpha_i y_i = 0 \quad (2)$$

where $\alpha$ is a Lagrange multiplier, $b$ is a bias value, $K$ is a kernel function, and $C$ is the trade-off parameter between error and margin. For testing with new data $z$:

$$f = w \cdot \phi(z) + b = \sum_{j=1}^{s} \alpha_{tj} y_{tj} K(x_{tj}, z) + b \quad (3)$$

$$w = \sum_{j=1}^{s} \alpha_{tj} y_{tj} \phi(x_{tj}) \quad (4)$$

The contribution weight of each feature can be calculated using the Support Vector Decision Function (SVDF). This decision function is a simple weighted sum of the training patterns plus a bias:

$$F(X) = \sum_{i=1}^{n} W_i X_i + b \quad (5)$$

The value of $F(X)$ depends on both the $X_i$ and the $W_i$ values: if $F(X)$ is positive, the pattern belongs to the positive class; otherwise, it belongs to the negative class. The absolute value of $W_i$ measures the strength of the classification: if $W_i$ is a large positive/negative value, then the $i$-th feature is one of the major features for the positive/negative class; if $W_i$ is close to zero, the $i$-th feature does not contribute significantly to the classification.

B. Forward Selection (FS) Approach

The Forward Selection (FS) approach [19] selects the most relevant variables (features/attributes) based on the stepwise addition of variables. The FS approach starts with an initial variable being selected for the features subset, and then the predictor variables are added to the subset one by one: if an added variable increases the system evaluation criterion, it stays in the features subset; otherwise, it is removed. The process continues until either all variables are used up or some stopping criterion is met. This approach yields nested subsets of variables: variables are progressively incorporated into larger and larger subsets. The FS approach does not necessarily find the best combination of variables, i.e., it is not guaranteed to find the optimal solution, but it results in a combination that comes close to the optimum.

C. Fuzzy Inferencing Systems

A fuzzy inferencing model (also called a knowledge based system) is a system that is able to make perceptions and new inferences or decisions using its reasoning mechanism. This reasoning mechanism involves a fuzzification process that transforms input values from crisp numerical values into fuzzy sets, and a defuzzification process that maps output control values from fuzzy sets back to crisp numerical values. An inference model consists of three basic components: 1. the knowledge base, which contains knowledge and expertise in the specific domain, i.e., domain specific facts and heuristics useful for solving problems; 2. the database, a short-term memory that contains the current status of the problem, the inference states, and the history of solutions to date (any new information generated from external sources, sensors or human interfacing, is stored in the database); and 3. the inference engine, the driver program of the knowledge based system, which operates on the knowledge in the knowledge base to solve problems and arrive at conclusions by interpreting the meaning of the new information coming into the database within the capabilities of the existing knowledge base [21].

There are various fuzzy inferencing models; the most common ones are the Mamdani fuzzy model and the Sugeno fuzzy model [21]. The output of the first model is a fuzzy set obtained by aggregating the qualified fuzzy sets using the min-max compositional rule of inference; defuzzification is therefore required on the output in order to produce a crisp control signal. The second model (the Sugeno model) generates a crisp consequence for each rule, and the overall crisp output is obtained using the weighted average method. The Sugeno model is used in the work of this paper.

III. PROPOSED APPROACH (FUZZY ESVDF)

We propose a new features selection approach, Fuzzy ESVDF, based on SVDF and FS with a fuzzy inferencing model. This approach selects the features in a stepwise way, evaluating the weight (rank) of each specified candidate feature by using equation (4). In our case, the initial feature is the feature with the maximum weight value, and the system evaluation criterion is the system performance (the classification accuracy and the training time). As shown in Fig. 1, the algorithm starts by picking the three features with the highest weight values from the features set S1 (S1 contains all the features with weight values equal to or greater than one, the weights being calculated using equation (4)), putting them in the features set S2, and calculating the classification accuracy and training time for S2. The feature with the next highest weight value from S1 is then added to S2, and the performance metrics are recalculated: if the added feature increases the system performance, it stays in the features subset; otherwise, it is removed. Through this process, two types of comparisons are made: a local fuzzy comparison and a global fuzzy comparison.
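One way to obtain the weights $W_i$ of equations (4)-(5) in practice is to read them off a trained linear SVM. The paper does not name a library, so the use of scikit-learn below is an illustrative assumption.

```python
import numpy as np
from sklearn.svm import SVC

def svdf_weights(X, y):
    """w = sum_j alpha_j y_j x_j for a linear kernel (equation (4));
    SVC.dual_coef_ already stores the products alpha_j * y_j."""
    clf = SVC(kernel="linear", C=1.0).fit(X, y)
    w = (clf.dual_coef_ @ clf.support_vectors_).ravel()  # equals clf.coef_
    return np.abs(w)   # |W_i| ranks feature i; S1 = {i : |W_i| >= 1}
```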

83 . Pick the first three features as an initial features set (S2). otherwise. If accuracy decreases. If training time has no changes and accuracy has no changes.” and. the algorithm will stop and the features set (S2) will be the selected features set. then continue adding feature [3] Figure 1. then the feature is non-important 2. Pick the Global accuracy If Accuracy41 >= Accuracy Global = Accuracy Else Global = Accuracy41 End if [2] Create the features set Sort the features set(S1)in descending order depend on its weight values. If training time decreases and accuracy decreases. Train41). count_loop = 0. If training time decreases and accuracy increase. 2. The first and the second input variables (percentage of change in the training time and in the accuracy) are represented by three fuzzy sets: “increase.” and “decrease” with their corresponding membership functions. it is kept in the features set (S2). If accuracy unchanges. Else continue_loop= 1. The system has one output ranging from “0” to “1” where “0” represents a non-important feature. then the feature is important 7. “Increase” refers to the case where the percentage of change (accuracy and time calculated by current selected features – accuracy and time calculated by previous selected features) in the training time and accuracy is slightly positive. The knowledge base is implemented by means of “ifthen” rules. The system has one output ranging from “0” to “1” where “0” represents a loop to continue. End if End if End while End if The selected features set = S2 “decrease” with their corresponding membership functions as shown in Fig. then the feature is non-important 8. then the feature is important 4.” “same. count_loop = count_loop + 1. Calculate the Accuracy and Training time of S2 (Accuracy1. The local fuzzy comparison compares the performance of the features set (S2) with the performance of the previous features set (S2). 3.” “same. This input variable is represented by three fuzzy sets: “increase. If training time increases and accuracy increases. Else Accuracy1 = Accuracy2. then the feature is non-important 5. The local fuzzy comparison is ranked according to a fuzzy system that takes two inputs: the percentage of increase or decrease in the training time as one input. The comparison is ranked according to a fuzzy system that takes only one input variable (percentage of change in the accuracy). as shown in Fig. then stop adding feature 2. If the classification accuracy of the features set (S2) is equal to or greater than the global accuracy value. The Fuzzy ESVDF Algorithm. If training time has no changes and accuracy increase. If training time increases and accuracy has no changes. Calculate the accuracy and training time of S2 (Accuracy2. “Same” refers to the case where there is no change in accuracy. then the feature is non-important The global fuzzy comparison compares the classification accuracy of the features set (S2) with the global accuracy. If (Global equal or less than Accuracy1) continue_loop = 0. which is equal to the minimum of two values: the accuracy of all the features and the accuracy of the features set (S1). “Increase” refers to the case where the percentage of change (selected features set accuracy – global accuracy) in the accuracy is slightly positive. If training time increases and accuracy decrease. This means that the training time and accuracy slightly increase after a feature is added. It compares the performance of the features set (S2) with the performance of the previous features set (S2). 
The local fuzzy comparison compares the performance of the current features set S2 with the performance of the previous features set S2. It is ranked according to a fuzzy system that takes two inputs: the percentage of increase or decrease in the training time, and the percentage of increase or decrease in the accuracy (accuracy and time calculated with the currently selected features minus accuracy and time calculated with the previously selected features). Each input variable is represented by three fuzzy sets, "increase," "same," and "decrease," with their corresponding membership functions, as shown in Fig. 2. "Increase" refers to the case where the percentage of change in the training time and accuracy is slightly positive, i.e., the training time and accuracy slightly increase after a feature is added; "same" refers to the case where the training time and accuracy remain almost the same. The system has one output ranging from "0" to "1," where "0" represents a non-important feature and "1" represents an important feature in the detection process. The knowledge base is implemented by means of "if-then" rules; nine rules are needed to describe the system and rank each feature as "important" or "non-important":

1. If the training time increases and the accuracy increases, then the feature is important.
2. If the training time increases and the accuracy has no change, then the feature is non-important.
3. If the training time increases and the accuracy decreases, then the feature is non-important.
4. If the training time has no change and the accuracy increases, then the feature is important.
5. If the training time has no change and the accuracy has no change, then the feature is non-important.
6. If the training time has no change and the accuracy decreases, then the feature is non-important.
7. If the training time decreases and the accuracy increases, then the feature is important.
8. If the training time decreases and the accuracy has no change, then the feature is important.
9. If the training time decreases and the accuracy decreases, then the feature is non-important.

The global fuzzy comparison compares the classification accuracy of the features set S2 with the global accuracy, which is equal to the minimum of two values: the accuracy obtained with all the features and the accuracy obtained with the features set S1. If the classification accuracy of S2 is equal to or greater than the global accuracy, the algorithm stops and the features set S2 is the selected features set. The comparison is ranked according to a fuzzy system that takes only one input variable (the percentage of change in the accuracy, i.e., the selected features set accuracy minus the global accuracy), represented by three fuzzy sets, "increase," "same," and "decrease," with their corresponding membership functions, as shown in Fig. 3. "Increase" refers to the case where the percentage of change in the accuracy is slightly positive. The system has one output ranging from "0" to "1," where "0" represents continuing the loop and "1" represents stopping it. Only three rules are needed to describe the system and decide whether or not to keep adding features to the features set S2:

1. If the accuracy increases, then stop adding features.
2. If the accuracy is unchanged, then stop adding features.
3. If the accuracy decreases, then continue adding features.

Fig. 1 summarizes the algorithm:

[1] Calculate the global accuracy:
    Calculate the accuracy and training time of all (41) features (Accuracy41, Train41).
    Calculate the accuracy and training time of the features with weight >= 1 (Accuracy, Train).
    If Accuracy41 >= Accuracy: Global = Accuracy; else: Global = Accuracy41.
[2] Create the features set:
    Sort the features set S1 in descending order of weight values.
    Pick the first three features as the initial features set S2.
    Calculate the accuracy and training time of S2 (Accuracy1, Train1).
    If Global <= Accuracy1: exit. Else: continue_loop = 1; count_loop = 0.
    While (continue_loop == 1) and (count_loop <= length(S1)):
        Add the next feature f(i) from S1 into S2.
        Calculate the accuracy and training time of S2 (Accuracy2, Train2).
        If (Accuracy2 < Accuracy1) and (Train2 > Train1):
            Remove f(i) from S2; count_loop = count_loop + 1.
        Else:
            Accuracy1 = Accuracy2; Train1 = Train2; count_loop = count_loop + 1.
            If Global <= Accuracy1: continue_loop = 0.
    The selected features set = S2.

Figure 1. The Fuzzy ESVDF algorithm. Figure 2. Sugeno fuzzy inferencing model for the local comparison (inputs: time, accuracy; output: feature rank). Figure 3. Sugeno fuzzy inferencing model for the global comparison (input: accuracy; output: action).
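A zero-order Sugeno rendering of the local comparison might look like the following sketch. The membership shapes, the +/-1% "same" band, and the crisp 0/1 rule outputs are our assumptions; the rule consequents follow the table above.

```python
def local_rank(dt, da, eps=0.01):
    """dt, da: fractional changes in training time and accuracy after
    adding a feature.  Returns a rank in [0, 1] (1 = important) via
    weighted-average defuzzification of the nine rules."""
    def mu(x):  # memberships for (decrease, same, increase)
        dec = max(min(-x / eps, 1.0), 0.0)
        inc = max(min(x / eps, 1.0), 0.0)
        return dec, 1.0 - max(dec, inc), inc

    t_dec, t_same, t_inc = mu(dt)
    a_dec, a_same, a_inc = mu(da)
    # (firing strength, crisp consequent) per rule; product t-norm for AND
    rules = [(t_inc * a_inc, 1), (t_inc * a_same, 0), (t_inc * a_dec, 0),
             (t_same * a_inc, 1), (t_same * a_same, 0), (t_same * a_dec, 0),
             (t_dec * a_inc, 1), (t_dec * a_same, 1), (t_dec * a_dec, 0)]
    num = sum(w * c for w, c in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.0
```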

IV. EXPERIMENTS AND RESULTS

To evaluate the performance of our proposed approach, we chose five different datasets: four datasets are taken from the UCI Irvine Machine Learning Repository [22], and the fifth is the KDD Cup 1999 data [23]. In this section, we first describe the content of the different datasets and the experimental settings, followed by the experimental results and a discussion.

A. Datasets Description

The first dataset, the SPECT Heart dataset, describes the diagnosis of cardiac Single Proton Emission Computed Tomography (SPECT) images [22]. It contains 267 samples, of which 55 are normal (20.6%) and 212 are abnormal (79.4%); each instance is characterized by 44 attributes.

The second dataset, Wisconsin Diagnostic Breast Cancer (WDBC), describes characteristics of the cell nuclei present in an image as either benign or malignant [22]. It contains 569 samples, of which 357 are benign (62.74%) and 212 are malignant (37.26%); each instance is characterized by 30 real-valued attributes.

The third dataset, the Hill and Valley dataset, represents records of 100 points each on a two-dimensional graph; when plotted in order (from 1 through 100) on the Y coordinate, the points create either a Hill or a Valley [22]. It contains 1212 samples, of which 612 are hill samples (50.5%) and 600 are valley samples (49.5%); each instance is characterized by 100 real-valued attributes.

The fourth dataset, Wisconsin Breast Cancer (WBC), also describes characteristics of cell nuclei as either benign or malignant [22]. It contains 699 samples, of which 458 are benign (65.52%) and 241 are malignant (34.48%); each instance is characterized by nine attributes.

The last dataset, the KDD Cup 1999 data [23], contains TCP/IP dump data for a network built by simulating a typical U.S. Air Force LAN in order to configure and evaluate intrusion detection systems. This dataset contains 6000 samples, of which 3000 are normal (50%) and 3000 are attacks (50%); each instance is characterized by 41 attributes.

B. Experimental Settings

Our experiment is divided into two main steps. In the first step, we apply the proposed algorithm (Fuzzy ESVDF) to select the appropriate features set for each dataset. In the second step, we validate the results using any classifier type; NN and SVM classifiers are used to evaluate the selected features. We carried out five validation experiments for each dataset (the SPECT Heart, WDBC, Hill and Valley, WBC, and IDS datasets), with training/testing ratios ((training %) / (testing %)) of 50/50, 40/60, 60/40, 30/70, and 70/30. Each experiment was repeated ten times with a random selection of the training and the testing data; each time, about 40% of the samples were randomly selected as the testing dataset and the remaining 60% were used as the training dataset. The objective is to select a subset of the features using the Fuzzy ESVDF algorithm and then to evaluate these selected features using both Neural Networks (NNs) and Support Vector Machines (SVMs).
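For completeness, the two headline measures can be computed from the confusion counts as in the straightforward sketch below, with the positive class meaning "attack", "abnormal", or "malignant" depending on the dataset.

```python
def accuracy_and_fpr(y_true, y_pred, positive=1):
    """Classification accuracy and False Positive Rate = FP / (FP + TN),
    i.e., the frequency with which the system raises an alarm in error."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p != positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    acc = (tp + tn) / len(y_true)
    fpr = fp / (fp + tn) if (fp + tn) else 0.0
    return acc, fpr
```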
Wisconsin Breast Cancer (WBC). In the first step. KDD Cup 1999 data [23]. A.

To evaluate our approach, we used NN and SVM classifiers to classify a record as either zero or one (binary classification).

Experimental Results

Fuzzy ESVDF was applied to the different datasets (SPECT Heart, WDBC, Hill and Valley, WBC, and IDS) to select the best feature set for each application. The datasets are compared with respect to several performance indicators: number of features, classification accuracy, False Positive Rate (FPR, the frequency with which the application reports malicious activity in error), training time, and testing time.

The results of the SVM classifier with Fuzzy ESVDF on all datasets are presented in Table 1.

TABLE I. COMPARISON OF DIFFERENT DATASETS USING SVMS (for each dataset, the selected and the complete feature sets are compared by number of attributes, classification accuracy, FPR, training time, and testing time)

For the SPECT Heart dataset, the number of features is reduced from 44 to 5 (a cut of nearly 88.6%); Table 1 shows a significant improvement in classification accuracy, and FPR, training time, and testing time are also improved. For the WDBC dataset, the number of features is reduced from 30 to 4 (a cut of nearly 86.7%); the classification accuracy, FPR, training time, and testing time in both experiments (using the four features selected by the Fuzzy ESVDF algorithm and using the entire 30 features) are nearly the same. For the Hill and Valley dataset, the number of features is reduced from 100 to 11 (a cut of nearly 89%); the classification accuracy based on the selected feature set is better than that of the complete feature set, and there is an obvious improvement in FPR, training time, and testing time. For the WBC dataset, the number of features is reduced from 9 to 3 (a cut of nearly 66.7%); there are no significant differences in classification accuracy, FPR, training time, or testing time between using the three selected features and all nine features. Finally, for the IDS dataset, the number of features is reduced from 41 to 7 (a cut of nearly 82.9%); the classification accuracy and FPR for the selected seven features show no significant differences from those obtained using all 41 features, while the seven selected features sharply reduce both the training time and the testing time.

The results of the NN classifier with Fuzzy ESVDF on all datasets are presented in Table 2.

TABLE II. COMPARISON OF DIFFERENT DATASETS USING NNS (the same indicators as in Table 1, with the NN classifier)

For the SPECT Heart dataset, Table 2 shows a significant improvement in training time for the selected features, while the classification accuracy and FPR in both experiments are nearly the same. For the WDBC dataset, the NN classifier shows a significant improvement in training and testing time. For the Hill and Valley dataset, the classification accuracies of the two feature sets (the 11 selected features and the entire 100 features) are nearly the same, and training and testing time are clearly improved. For the WBC dataset, Table 2 shows no significant improvement in classification accuracy, FPR, or testing time, but the training time is improved. Finally, for the IDS dataset, the NN classifier again shows an obvious improvement in training and testing time based on the selected feature set.

Discussion

As shown in Table 1 and Table 2, there is a dramatic reduction in the number of features for all datasets after the application of Fuzzy ESVDF, while the classification accuracy, FPR, training time, and testing time based on the selected feature set remain very near to those of the system based on the complete feature set. In summary, the overall system performance is improved by using the selected feature set.
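For concreteness, the comparison protocol behind Tables 1 and 2 can be sketched in a few lines of Python. This is a minimal illustration, not the authors' code: it assumes scikit-learn's SVC, uses random stand-in data, and the 5-feature index list is a hypothetical placeholder for the Fuzzy ESVDF output.

# Minimal sketch of the evaluation protocol: train/test an SVM on a
# selected feature subset vs. the complete set, reporting accuracy,
# false positive rate, and training/testing time.
import time
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

def evaluate(X, y, feature_idx):
    """Score a binary SVM restricted to the given feature indices."""
    Xs = X[:, feature_idx]
    X_tr, X_te, y_tr, y_te = train_test_split(Xs, y, test_size=0.3, random_state=0)
    clf = SVC(kernel="linear")
    t0 = time.time(); clf.fit(X_tr, y_tr); train_t = time.time() - t0
    t0 = time.time(); pred = clf.predict(X_te); test_t = time.time() - t0
    acc = np.mean(pred == y_te)
    # FPR: fraction of negative (class 0) records reported as positive.
    neg = (y_te == 0)
    fpr = np.mean(pred[neg] == 1) if neg.any() else 0.0
    return acc, fpr, train_t, test_t

# Example with random data; selected_idx stands in for the Fuzzy ESVDF output.
X = np.random.rand(500, 44); y = np.random.randint(0, 2, 500)
selected_idx = [0, 7, 12, 30, 41]            # hypothetical 5-feature subset
for name, idx in [("selected", selected_idx), ("complete", list(range(44)))]:
    print(name, evaluate(X, y, idx))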

In summary, the experimental results demonstrate the feasibility of the proposed approach for feature selection based on SVMs and FS with the fuzzy inferencing model. With SVMs, satisfactory performance could be obtained much more easily than with other approaches, and employing a reduced number of features selected by SVMs may be more advantageous than other conventional feature selection methods [13, 16]. The advantage becomes more conspicuous for many applications, as our experiments show.

Table 3 compares the execution times of the Fuzzy ESVDF approach on the five datasets.

TABLE III. EXECUTION TIME COMPARISON FOR THE DIFFERENT DATASETS (for each dataset: the size of the complete feature set - 44, 30, 100, 9, and 41 - the size of the selected set - 5, 4, 11, 3, and 7 - and the execution time in seconds)

Comparing the different datasets according to execution time, Table 3 shows that the proposed approach is reasonably fast when the number of features is around 50; when this number doubles (to 100 features), the execution time increases greatly (more than doubling). Likewise, the number of features in the WBC dataset is nine, and when the number of features roughly triples (the case of the WDBC dataset), the execution time more than triples. However, this amount of time does not depend on the number of features only: the efficiency of Fuzzy ESVDF depends on how well SVMs are able to classify the dataset, because the ranking approach depends on the system performance (classification accuracy and training time) that is calculated by SVMs; in other words, it also depends on how fast the SVMs are.

The proposed approach (Fuzzy ESVDF) has many advantages that make it attractive for feature selection applications. First, it combines good effectiveness with high efficiency: it produces an efficient feature subset that is representative and informative and can be used to replace the complete feature set, so it provides an effective solution to the dimensionality reduction problem in general, although it cannot guarantee the optimal solution in terms of minimizing the number of features to the least possible number. Second, the algorithm is simple and does not require that many parameters be initialized, facilitating the retention or modification of the system design and allowing this model to be used in a real-time environment. Third, Fuzzy ESVDF is a feature selection approach that can be applied regardless of the type of classifier used; as our experiments show, it gives the best performance in terms of training and testing time while retaining high classification accuracy with either classifier. On the other hand, the SVDF used in this approach depends on SVMs, which may obstruct the modification and maintenance processes and impede the use of this approach in some types of applications; the approach also becomes slow when the number of features increases to around 100, although in all situations it delivers a satisfactory number of features with excellent performance results, making it a suitable feature selection method for many applications.

V. CONCLUSIONS AND FUTURE WORKS

SVDF is used to rank the input features by giving a weight value to each of them. However, by using the weights only, we are unable to specify the appropriate feature set for a detection process, because selecting the features with the highest rank values (weights) cannot guarantee that combining these features creates the best feature set: the correlations among the candidate features are ignored. Moreover, limiting the number of selected features to a specific value (e.g., 6 or 11, as mentioned in the previous works [14, 24, 25]) as an indicator of the highest rank value may affect system performance. Our new approach overcomes these limitations by proposing a Fuzzy Enhanced Support Vector Decision Function (Fuzzy ESVDF). The Fuzzy ESVDF approach improves the classification process by evaluating feature weights through SVDF and studying the correlation between these features through the application of the Forward Selection (FS) algorithm, which enables the approach to efficiently select the appropriate feature set for the classification process; the fuzzy inferencing model is used to accommodate the learning approximation and the small differences between decision-making steps in the FS approach.

To evaluate the proposed approach, we used SVM and NN classifiers with five different datasets (SPECT Heart, WDBC, Hill and Valley, WBC, and IDS). The experimental results demonstrate that our approach can reduce training and testing time while maintaining high classification accuracy. Future work will investigate the possibility and feasibility of implementing our approach in real-time applications. We are also planning to improve our approach and decrease its execution time.

REFERENCES
[1] S. Zaman and F. Karray, "Fuzzy ESVDF Approach for Intrusion Detection Systems," The IEEE 23rd International Conference on Advanced Information Networking and Applications (AINA-09), May 26-29, 2009. Unpublished.
[2] S. Zaman and F. Karray, "Feature Selection for Intrusion Detection System Based on Support Vector Machine," 6th Annual IEEE Consumer Communications & Networking Conference (IEEE CCNC 2009), 10-13 January 2009. Unpublished.

A. A. Sung. Guangzhou. “A Bayesian Approach to Joint Feature Selection and Classifier Design. Feature Ranking and Selection for Intrusion Detection Using Artificial Neural Networks and Statistical Methods. and S. L. 1. Consistency based feature selection. M. Mukkamala. July 16-21. and J. 1987. Technical Report. 2004. Rubanau. no. Bilings. 1-2. H. H. “Unsupervised Feature Evaluation: A Neuro-Fuzzy Approach. M. vol. Mitra.ll. Sung. Pattern Analysis and Machine Intelligence. IEEE Trans. no. Figueiredo. Optimization of Intrusion Detection through Fast Hybrid Feature Selection. 36. Page(s): 22-33. Fodor. Feature Selection for Intrusion Detection: An Evolutionary Wrapper Approach. http://archive. UCD-CSI2007-7. Page(s): 18-21. 2005. P. 9. Page(s): 301-312. and A. Khaja Mohammad Shazzad. Karray. Pattern Analysis and Machine Intelligence. 26. 2005. Motoda. R. June 2002. Page(s): 1231 – 1236. and J. 2004 IEEE International Joint Conference on Neural Networks. no. Center for Applied Scientific Computing. first edition. 2004 IEEE International Conference on Systems. 2006. vol. Law. 2002. and M. B. Page(s):4754 . Horeis.R. Page(s): 366-376. no.[3] [4] [5] [6] [7] [8] [9] [10] [11] [12] [13] [14] [15] [16] [17] [18] [19] [20] [21] [22] [23] [24] H. Pattern Analysis and Machine Intelligence. C. March 2000. Tools and Applications”. Hofmann. W. no. January 2007.” IEEE Trans. Page(s): 1105-1111. Almuallim and T. Mukkamala and A. F. 11. Page(s): 279-3051994. “Unsupervised Feature Selection Using Feature Similarity”. IEEE Transaction on Pattern Analysis and Machine Intelligence. Gao. and B. “soft Computing and Intelligent Systems Design: Theory. A Survey of Dimension Reduction Techniques. A. January 2000. Pal. Cunningham. vol. IEEE Trans. March 2002. 22. Jong Sou Park. Proceedings of the 4th Pacific-Asia Conference on Knowledge Discovery and Data Mining (PAKDD 2002). 1. Page(s): 2734-2739. September 2004. Webb.edu/mission/communications/ist/index. Mukkamala. no. Jain. Silva. Page(s): 162-166. 26.” IEEE Trans. X.uci. vol. Technical Report. vol. 2. Basak. Kochurko and U. Golovko. De. August 2005. Artificial Intelligence. S. Smola. 25-28 May 2003. Lawrence Livermore Nat’l Laboratory. Technical Report UCRL-ID-148494. Page(s). International Joint Conference on Neural Networks.Sung. I. Sick. 2006 International Joint Conference on Neural Networks (IJCNN’06). 9. Selection of Variables to Preserve Multivariate Data Structure Using Principal Components. Statistical Learning Theory and Support Vector Machines. 24.html V. second ed. 69. "Learning Boolean Concepts in the Presence of Many Irrelevant Features". IEEE Trans.edu/ml/ http://www. IEEE 2001. Statistical Pattern Recognition. 3. Krzanowski. A. S. 2007. 25-29 July 2004. Pal. Hartemink. Duin. Jain. Pattern Analysis and Machine Intelligence. and K. S.T. [25] H. A. 2002. Dimensionality Reduction and Attack Recognition using Neural Network Approaches. H. Page(s): 264 – 267. Carin. Wei and S. P. Murthy. and H. Wiley. 2007.4761. Yang. nos.ics. 2007. (PDCAT’05). The 12th IEEE International Conference on Fuzzy Systems. 10-13 Oct. Figueiredo. Dietterich. Applications and Technologies. Simultaneous Feature Selection and Clustering Using Mixture Models”. A. Wang. 98109. Krishnapuram. 2003. C. R. Vaitsekhovich. Dimension Reduction. Dash. Ant Colony Optimization Based Network Intrusion Feature Selection and Detection. vol. Applied Statististics. A. Page(s): 3273 – 3278. 2004. L. Statistical Pattern Recognition: A Review. and A. Tamilarasan. Page(s): 11541166. M. 
Neural Networks. Liu . Feature Subset Selection and Ranking for Data Dimensionality Reduction. The Sixth International Conference on Parallel and Distributed Computing. August 8th. Detecting Denial of Service Attacks Using Support Vector MachinesS. A Framework for Countering Denial of Service Attacks. Page(s): 15631568. Page(s): 4-37. 12-17 Aug. Man and Cybernetics. 87 . September 2004. vol. Mao. Yendrapalli. 05-08 Dec. P. The Fourth International Conference on Machine Learning and Cybernetics.mit.

PDC: Propagation Delay Control Strategy for Restricted Floating Sensor Networks

Liu Xiaodong
College of Information Science and Technology, Ocean University of China, Qingdao, China
liuxiaodong.ouc@gmail.com

Abstract—Restricted Floating Sensor (RFS), proposed by Yunhao Liu, is an important model for offshore data collection. In this paper, we propose a Propagation Delay Control (PDC) strategy for RFS networks. The strategy modifies the transmission delay adaptively, depending on orders from users together with the changing conditions of the wireless links. Our experiment shows that this work succeeds in changing the propagation delay on request.

Keywords: Propagation delay, Bit error rate, Harbor, Sensor network

I. INTRODUCTION

There is significant interest in analyzing the siltation of estuaries and harbors. H. Harbor, the second largest harbor for coal transportation (6.7 million tons per year) in China, currently suffers from an increasingly severe problem of silt deposition along its sea route (19 nautical miles long). The sea route has always been threatened by the silt from the shallow sea area, and the siltation of the harbor can change in a moment; monitoring the sea depth costs this harbor more than 18 million US dollars per year. The highly variable nature of wind brings even more intensive effects: records show that strong winds with wind forces of 9 to 10 on the Beaufort scale hit H. Harbor from Oct. 10th to Oct. 13th in 2003, and the storm surge brought 970,000 m³ of silt to the sea route, which suddenly decreased the water depth from 9.5 m to 5.7 m and blocked most of the ships weighing more than 35 thousand tons. There is therefore a need for highly precise, real-time offshore monitoring.

In [1], Yunhao Liu proposed the Restricted Floating Sensor (RFS) model. By locating such sensors, the sea depth can be estimated efficiently without the help of extra ranging devices. Adjusting the propagation delay on request is required by RFS networks: when a storm comes, a shorter propagation delay is needed to ensure the safety of the channels, while in better weather a longer response interval is necessary to reserve energy. In this paper we propose a Propagation Delay Control (PDC) strategy to control the transmission time. The strategy obtains the Waiting Time (WT) by modifying the First Weight (FW) and the Second Weight (SW) dynamically, depending on orders from users. We organize this paper as follows: Section II discusses the design of the PDC strategy, Section III describes our experiment and discusses the results we obtained, and the conclusion is given in Section IV.

II. DESIGN OF PDC

In this section we describe the design of PDC in detail. The architecture of PDC is shown in Fig. 1 c).

Firstly we give the design of the FW module. Throughout this paper, we denote by D the length of the packet data item and by H the length of the header, and we assume all nodes transmit data at a fixed frequency; let p stand for the fixed transfer rate of the nodes. Let V = {v_ij} denote the nodes of the network, where i stands for the layer and j for the sequence number within the layer. For each node, according to its bit error rate (BER), the node's weight for transmission time (weight, for short) is defined as

    w(v_ij) = 1 / (1 - BER)^(H+D)                                   (1)

that is, w(v_ij) is the average number of packets required for successfully transferring D bits of data.

Definition 1: for any node v_{i0,j0} in the RFS network, let S(v_{i0,j0}) = {v_ij | v_ij ∈ V, i = i0+1, j ∈ J_{i0,j0}} denote the aggregation of all child nodes of v_{i0,j0}, where J_{i0,j0} contains the sequence numbers of these nodes. The First Weight (FW) of v_{i0,j0} is specified by the maximum FW of its child nodes:

    fw(v_{i0,j0}) = max_{j ∈ J_{i0,j0}} { fw(v_{i0+1,j}) } + w(v_{i0,j0})      (2)

It is then straightforward that the first weight of a leaf node is

    fw(v_{depth,j}) = w(v_{depth,j})                                 (3)

(This project is funded by the Chinese National High Technology Research and Development Plan (863 program) under Grant No. 2006AA09Z113 and by the Key Project of Chinese National Programs for Fundamental Research and Development (973 program) under Grant No. 2006CB303000.)
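The weight and FW definitions in (1)-(3) amount to a bottom-up pass over the routing tree. The following is a minimal Python sketch of that pass; the Node layout and the values of H, D, and the per-node BERs are illustrative assumptions, not part of the paper.

# Sketch of the FW module, eqs. (1)-(3): w from the BER, fw by a
# bottom-up pass over the routing tree.
H, D = 48, 512   # header and data lengths in bits (assumed values)

class Node:
    def __init__(self, ber, children=()):
        self.ber = ber
        self.children = list(children)

def w(node):
    # Eq. (1): expected number of packets to deliver H+D bits at this BER.
    return 1.0 / (1.0 - node.ber) ** (H + D)

def fw(node):
    # Eqs. (2)-(3): a leaf's fw is its w; otherwise the max over the
    # children's fw plus the node's own w.
    if not node.children:
        return w(node)
    return max(fw(c) for c in node.children) + w(node)

# Tiny example tree: a root with two leaves of different link quality.
root = Node(1e-4, [Node(2e-4), Node(5e-4)])
print(fw(root))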

Figure 1. PDC strategy for RFS networks: a) our experiment site, b) the topology model for RFS networks, c) the architecture of the PDC strategy (the FW model, SW model, and AT model, connected by T, ΔT, BER, and d).

Then we design the SW module. As discussed in the introduction, nodes in an RFS network float within a restricted area, so the topology is variable because of this restricted motility, and an amended method is needed to deal with the variability. Firstly we give a definition which is important in describing this variability.

Definition 2: for any node v_{i0,j0} in the RFS network, let d(v_{i0+1,j}, v_{i0,j0}) denote the distance between the child v_{i0+1,j} and its nearest parent node v_{i0,j0}, and let d_min(v_{i0+1,j}, v_{i0,j0}) denote the shortest such distance ever detected. The Second Weight (SW) of node v_{i0,j0} is specified by the summation of the weighted child SW and its own FW:

    sw(v_{i0,j0}) = tw(v_{i0,j0}) + fw(v_{i0,j0})                    (4)

where

    tw(v_{i0,j0}) = max_{j ∈ J_{i0,j0}} { [ d(v_{i0+1,j}, v_{i0,j0}) / d_min(v_{i0+1,j}, v_{i0,j0}) ] × sw(v_{i0+1,j}) }      (5)

For a leaf node tw(v_{depth,j}) = 0, so sw(v_{depth,j}) = fw(v_{depth,j}).

In the PDC strategy, each node transmits its SW to its parent node, so the sink node finally receives sw(v_{1,j}) for every subtree L_j rooted at v_{1,j}. Given the upper limit T of the propagation delay, the sink works out the limit of the propagation delay of L_j:

    ΔT(L_j) = T − sw(v_{1,j}) / p                                    (6)

A message carrying ΔT(L_j) and sw(v_{1,j}) is then passed down to the nodes of subtree L_j, and each node uses the following expression to work out its own adjust time:

    at(v_{ik}) = [ sw(v_{ik}) / sw(v_{1,j}) ] × ΔT(L_j) + fw(v_{ik}) / p       (7)

The transmission time of each node is thus worked out according to the FW and SW.

Theorem 1: the propagation delay of L_j is at most T.

Proof: for the root node v_{1,j}, (7) gives

    at(v_{1,j}) = ΔT(L_j) + fw(v_{1,j})/p = T + (fw(v_{1,j}) − sw(v_{1,j}))/p.

From (4) we know sw(v_{1,j}) = tw(v_{1,j}) + fw(v_{1,j}), hence fw(v_{1,j}) − sw(v_{1,j}) = −tw(v_{1,j}) ≤ 0, and therefore at(v_{1,j}) ≤ T. For any other node v_{ik} with i > 1 we have sw(v_{ik})/sw(v_{1,j}) < 1 and fw(v_{ik}) < fw(v_{1,j}), so at(v_{1,j}) > at(v_{ik}): the adjust time of the root node is the longest in its subtree. This ensures the propagation delay stays within the limitation.

As shown in (6) and (7), at(v_{ik}) is dominated by T; because of differences in BER, FW, and distance, the transmission times of different nodes differ, so nodes do not send data at the same time.
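A short sketch of the SW module and the adjust-time rule, extending the FW sketch above. The dist and dist_min fields attached to each child (the current and shortest-observed link distances of eq. (5)) are assumed inputs, as are T and p; this follows the reconstruction of eq. (7) given here, not a published implementation.

# Sketch of eqs. (4)-(7): sw bottom-up, then the per-node adjust time.
def sw(node):
    if not node.children:
        return fw(node)                                   # leaf: tw = 0
    tw = max((c.dist / c.dist_min) * sw(c) for c in node.children)  # eq. (5)
    return tw + fw(node)                                  # eq. (4)

def adjust_times(root, T, p):
    """Adjust time for every node of a subtree, eqs. (6)-(7)."""
    sw_root = sw(root)
    dT = T - sw_root / p                                  # eq. (6)
    at, stack = {}, [root]
    while stack:
        n = stack.pop()
        at[n] = sw(n) / sw_root * dT + fw(n) / p          # eq. (7)
        stack.extend(n.children)
    return at

# Example: reuse the FW tree, attaching link distances to the children.
for child, (d_now, d_min) in zip(root.children, [(12.0, 10.0), (9.0, 8.0)]):
    child.dist, child.dist_min = d_now, d_min
print(max(adjust_times(root, T=5.0, p=1000.0).values()))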

Each node works out its new adjust time easily by this rule, which modifies the transmission time of the nodes depending on the limitation of delay afforded by users.

III. EXPERIMENT

In this section we evaluate the performance of the PDC strategy. Our experiment site lies in the east coastal area of Qingdao. We use TelosB motes and TinyOS as our development basis, and we take LEPS, introduced by Sun Limin et al. [5], as the routing protocol. The current system consists of 23 sensor nodes deployed in the field, reporting sensing data continuously to the base station; the complete system, however, is designed to scale to hundreds of sensors covering the sea area off Qingdao. The data centre of Ocean Sense has been launched, and most of our data can be seen on the internet. We are deploying the working system in Qingdao, China.

With limited power resources, it is vital for RFS nodes to minimize energy consumption during radio communication in order to extend the lifetime of the network [2]. Under the PDC strategy, all nodes transfer only the data gathered within the limitation of delay instead of waiting for every child node, and, because the transmission times of nodes differ, different nodes do not send data at the same time. This decreases the probability of collision and eases network congestion. All of these effects help to dampen congestion, reduce the number of collisions (resulting in fewer retransmissions), and converge the propagation delay to a certain set point.

The propagation delay should be shortened when a storm comes. The sink only needs to send a message containing the new limitation of the propagation delay (denoted by t, with t < T) when there is a need to change it. From (6) we know ΔT'(L_j) < ΔT(L_j), and from (7) we know the transmission time of each node is then shortened by [sw(v_{ik})/sw(v_{1,j})] × (ΔT(L_j) − ΔT'(L_j)). In the same way, we can change the propagation delay back to normal when the storm is over, to decrease the energy consumption.

Because the condition of the channel between nodes varies in different weathers, we compare the performance of PDC under different wind scales and wave heights. The conditions are shown in Table I.

TABLE I. CONDITIONS OF WEATHERS
Condition    Wind scale    Wave height (m)
one          1-2           0.5
two          8-9           3

Figure 2 displays the adjustability of the response time on request (requested response time against the interval of packets) under these conditions. From the figure we see that the adaptability in condition one is better than in the worse conditions; this is mainly because of the increased interval between nodes caused by bad weather. In all conditions, however, expression (7) remains effective, and the experiment shows our work succeeds in changing the propagation delay on request.

Figure 2. The adjustability of response time on request.

IV. CONCLUSION

In this paper we proposed a Propagation Delay Control strategy for RFS networks. The algorithm modifies the transmission time of the nodes depending on the limitation of delay afforded by users. Since all information used by the strategy is stored in memory, our future work will focus on designing an improved strategy for large-scale RFS networks.

REFERENCES
[1] Zheng Yang, Mo Li, and Yunhao Liu, "Sea Depth Measurement with Restricted Floating Sensors," The 28th IEEE Real-Time Systems Symposium (RTSS), Tucson, 2007, pp. 469-478.
[2] Raghunathan V., Schurgers C., Park S., et al., "Energy Aware Wireless Microsensor Networks," IEEE Signal Processing Magazine, 2002, 19(2): 40-50.
[3] He Tian, Blum Brian M., Stankovic John A., and Abdelzaher T., "AIDA: Adaptive Application Independent Data Aggregation in Sensor Networks," ACM Trans. on Embedded Computing Systems, 3(2): 426-457.
[4] Hammond J. L., Pawlikowski K., and Spragins J. D., Telecommunications: Protocols and Design, Addison Wesley Publishing Company, Boston, 1991.
[5] Sun Limin, et al., "Principle and Performance Evaluation of Routing Protocol in TinyOS," Computer Engineering, 2007, 33: 112-114.

Fast and High Quality Temporal Transcoding Architecture in the DCT Domain for Adaptive Video Content Delivery

Vinay Chander¹ (vinay87@gmail.com), Aravind Reddy² (aravind_k_iiitm@yahoo.co.in), Shriprakash Gaurav³ (gaurav.iiitm@gmail.com), Nishant Khanwalkar⁴ (nis.agnos@gmail.com), Manish Kakhani⁵ (manishkakhani@gmail.com), Shashikala Tapaswi⁶ (stapaswi@iiitm.ac.in)
Department of Information Technology, Indian Institute of Information Technology and Management, Gwalior, INDIA

Abstract

In this paper, we attempt to help with the problem of video content delivery to heterogeneous end devices over channels of varying capacities by proposing a fast and efficient temporal transcoding architecture. Our architecture operates entirely in the DCT domain, thus avoiding the computationally expensive operations of inverse DCT and DCT. We propose an efficient temporal transcoding architecture in which motion change is considered. For motion vector composition, both bilinear interpolation vector selection (BIVS) and forward dominant vector selection (FDVS) are used. The macro block coding types are re-judged to reduce drift errors, and the re-encoding errors are also minimized. We have implemented the algorithms and carried out experiments on Mpeg-1 video sequences, and the results are found to be promising.

Keywords: Video transcoding, DCT domain, motion based frame skipping, motion vector composition and prediction error re-calculation

1. Introduction

Video communication over the Internet is becoming increasingly popular today, and the eminence of digital video on the internet is increasing by the day. Transmission of videos over networks has always been a problem, due to reasons such as the high bandwidth requirement, huge file sizes, lossy channels (especially wireless links, low bit rate two-way communication links, etc.), the large variety of end terminals with different constraints that exist in today's internet, and the varying constraints of the channels that make up networks [7]. An overview of other approaches for video content adaptation can be found in [1, 2 and 6]. In this paper, we limit ourselves to solving the problem of bandwidth and end terminal frame rate requirements in the context of video content distribution over the internet.

A video may be considered as a stream of frames played in quick succession with very short time intervals, which is viewed and interpreted by the end user(s). The short time interval between consecutive frames means that their contents should be very close to each other. This fact brings about an important feature which can be exploited to reduce the transmitted data size: in temporal transcoding of videos, pre-encoded frames may be dropped from the incoming sequence to freely adjust the video to meet network and client requirements [4, 5, 14]. Since the final user of the video stream is a human, the biological features of the human visual system should be considered, and the priority in preserving data should be based on these features; this can be made use of by dropping the less important frames. Motion activity gives a measure of the motion in a frame and is defined as the sum of the motion vector components in that frame. If the motion activity is larger than a given threshold, the frame cannot be skipped, since it has considerable motion. We use the modified definition of motion activity [11]. Thus we propose an architecture which adapts the incoming compressed video to the end terminal's frame rate constraint as well as to the bandwidth of the available channel, by using a motion-based frame skipping algorithm [5, 11] to skip frames.

To summarize our work, in this paper we present a fast frame-skipping transcoding architecture that operates entirely in the DCT domain, which accelerates the transcoding process. Our architecture allows skipping of B frames as well as P frames. When P frames are dropped, the motion vectors that refer to the dropped frames become invalid and need to be recalculated with respect to their new reference frames in the transcoded bit stream. We calculate the new motion vectors using motion vector composition, implementing a scheme in which both bilinear interpolation vector selection (BIVS) [8] and forward dominant vector selection (FDVS) [9] are used; the method appropriate for the macro block under consideration is employed, making the process computationally cheap yet achieving good results. Likewise, the quantized DCT coefficients of the residual signal of non-skipped frames become invalid, as they may refer to reference frame(s) which no longer exist in the transcoded bit-stream; it therefore becomes necessary to re-compute the new set of quantized DCT coefficients with respect to the past reference frame(s) that will act as references in the transcoded bit-stream. The pixel-domain approach for re-calculating the prediction residual involves high computation, due to the computationally expensive operations of inverse DCT and DCT, and re-encoding errors are incurred in performing DCT and re-quantization. We therefore compute the quantized DCT coefficients for non-skipped frames entirely in the DCT domain, unlike some architectures which operate partially or fully in the pixel domain [4]. We re-calculate the new quantized DCT coefficients of the prediction residuals by processing the quantized DCT coefficients available from the incoming stream, even in the case of motion-compensated (MC) macro blocks; this is achieved using the block translation method [13, 15]. We also present a scheme to re-judge the macro block coding types to minimize the drift errors [4], thus further reducing the re-encoding errors. The smoothness and quality of the transcoded video stream are also maintained by using an efficient motion-based frame skipping algorithm.

The rest of the paper is organized as follows. In Section 2, we explain our transcoding architecture. Section 3 describes the macro block re-encoding scheme, which covers three parts: (i) re-judging macro block types under different situations, (ii) motion vector composition, and (iii) recalculation of the residual signal. Section 4 briefly summarizes our implementation and Section 5 presents the experimental results obtained. Finally, the conclusion and future scope are given in Section 6.

2. Transcoding architecture

The block diagram of our transcoding architecture is depicted in Figure 1. Firstly, the input bit stream is parsed by a variable length decoder (VLD), which performs the extraction of the header information, frame coding modes, macro block coding modes, motion vectors, and the quantized DCT information for each macro block. This is followed by the calculation of the motion activity of each frame in the video, using the modified definition of motion activity, which is briefly described below [11]:

    (MA)_m = k ( |x_m| + |y_m| )                       (1)

where (MA)_m is the motion activity of a macro block, k is a properly tuned constant, and |x_m| and |y_m| are the absolute values of the motion vector components. Unlike some approaches, we also take intra macro blocks into account in the motion activity computation: since an intra macro block is produced when there are many prediction errors (namely, when the macro block is largely different from the reference area in the previous frame), intra macro blocks are assigned the maximum motion activity value, equal to the maximum size of the motion vectors, which corresponds to the search range used by the motion estimation procedure. The motion activity of a frame is then calculated by summing up the motion activities of all the macro blocks of that frame:

    MA(f) = Σ_m (MA)_m                                 (2)

Figure 1. Transcoding Architecture
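The motion activity computation of eqs. (1)-(2) is straightforward; the short Python sketch below illustrates it. The constant k and the search range are assumed, tunable values, and the macro-block representation is a placeholder, not the paper's C++ data structures.

# Sketch of eqs. (1)-(2): per-macro-block motion activity, with intra
# blocks pinned to the maximum value, summed over the frame.
K = 1.0
SEARCH_RANGE = 16   # assumed motion-estimation search range (pixels)

def mb_activity(mb):
    """mb: dict with an 'intra' flag and a motion vector 'mv' = (x, y)."""
    if mb["intra"]:
        return K * 2 * SEARCH_RANGE        # maximum possible activity
    x, y = mb["mv"]
    return K * (abs(x) + abs(y))           # eq. (1)

def frame_activity(macroblocks):
    return sum(mb_activity(mb) for mb in macroblocks)   # eq. (2)

# Example: one moving inter block and one intra block.
print(frame_activity([{"intra": False, "mv": (3, -2)},
                      {"intra": True, "mv": (0, 0)}]))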

Once the motion activities are computed, the next step is the application of a motion-based frame skipping algorithm, as described in [11]:

Algorithm for frame skipping
Motion-based Policy (frame f):
    if (f = first frame) Thr(f) = 0
    else Thr(f) = (Thr(f-1) + MA(f-1)) / 2
    if MA(f) <= Thr(f) skip f
    else transcode f

After this step, an initial list of frames that can be skipped is available. We then select a value N (which we name the quality factor), giving the number of frames to skip per GOP (Group Of Pictures); it is decided by the channel capacity and the target frame rate. Using the value N, the frames contained in the list are sorted by their motion activities, and the final list of frames to be skipped from the incoming bit stream - the N frames with the least motion activities within a GOP - is obtained. By experiment, it is observed that up to N = 3 the smoothness and quality of the output video are maintained to a very good extent. The final list of frames to be skipped includes B frames as well as P frames.

Since B frames are non-reference frames, they are dropped from the bit stream first, and no re-encoding is required in this case. The dropping of P frames is performed next; this order is followed to avoid redundant re-encoding of frames. Dropping P frames requires a re-encoding scheme, because the motion vectors and prediction residuals of frames that refer to a dropped P frame become invalid. Consider a GOP (in display order) IBBPBBPBBPBBPBB, and suppose the 7th frame (a P frame) is dropped: the two B frames that precede it (the 5th and 6th frames), the B frames that follow it (the 8th and 9th frames), and the subsequent P frame (the 10th frame) all need to be modified.

The P frame skipper (the portion of Figure 1 that follows the B frame skipper) includes a switch S. When the switch opens, the transcoder performs the motion compensation in the DCT domain, which updates the DCT-domain buffer holding the quantized DCT coefficients of the residuals. After the new motion vectors and prediction residuals are calculated, depending on the coding mode of the macro blocks, the output is fed to a variable length coder (VLC), thus producing the final bit stream. In this way, the frames that are part of the transcoded bit stream are re-encoded. Our re-encoding scheme is presented in the following section.

Figure 2. Re-encoding architecture
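The skipping policy plus the quality factor N can be rendered runnable in a few lines. This is a sketch under stated assumptions: the motion activities are placeholder inputs, and the running-average threshold is applied per GOP here for simplicity, whereas the paper applies it over the whole sequence.

# Runnable sketch of the motion-based policy and the quality factor N:
# the running threshold marks candidate frames, then the N lowest-activity
# candidates per GOP are chosen for skipping.
def skip_candidates(activities):
    """Indices marked skippable by the running-average threshold."""
    out, thr = [], 0.0
    for f, ma in enumerate(activities):
        if f > 0:
            thr = (thr + activities[f - 1]) / 2.0   # Thr(f)
        if f > 0 and ma <= thr:
            out.append(f)
    return out

def frames_to_skip(activities, n_per_gop, gop_size=15):
    skips = []
    for start in range(0, len(activities), gop_size):
        gop = skip_candidates(activities[start:start + gop_size])
        gop = sorted(gop, key=lambda f: activities[start + f])[:n_per_gop]
        skips.extend(start + f for f in gop)
    return sorted(skips)

acts = [5, 1, 2, 8, 1, 1, 9, 2, 1, 7, 3, 2, 6, 1, 2]   # one example GOP
print(frames_to_skip(acts, n_per_gop=3))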

3. Macro block re-encoding

Macro block type conversions and re-encoding proceed case by case, as follows.

Case 1: Intra macro blocks need no changes, as they do not contain references.

Case 2: In a B frame that appears before the dropped frame, if the macro block is (Forward) predicted, it requires no change. If the macro block is (Backward) predicted or (Forward + Backward) predicted, it is converted into an Intra macro block and replaced by the referenced picture area; its quantized DCT coefficients can be calculated from the incoming quantized DCT coefficients of its neighbouring macro blocks (available from the DCT-domain buffer) using the block translation method available in the literature [13, 15]. It is found that the increase in bit rate due to this conversion is not significant, since the percentage of these macro block types is quite low.

Case 3: In the frames that appear after the dropped frame (Figure 2, where MB_t is the macro block being modified, MB_{t-1} is the skipped macro block, and MB_{t-2} is the new reference area of MB_t), we have the following sub-cases.

3.1. Intra coded reference: If the referenced macro block (MB_{t-1}) is Intra coded, the newly quantized DCT coefficients of the prediction error for MB_t are given by

    Q[DCT(e_t^s)] = Q[DCT(e_t)] + Q[DCT(e_{t-1})]                    (5)

where Q[DCT(e_t)] is the original error term, Q[DCT(e_t^s)] is the modified error term, and

    Q[DCT(e_{t-1})] = Q[DCT(MB_{t-1})] − Q[DCT(MB_{t-2})]            (6)

Here Q[DCT(MB_{t-1})] and Q[DCT(MB_{t-2})] can be found by the block translation method [13, 15]. The linear property of the DCT is used, and since DCT(e_t) and DCT(e_{t-1}) are divisible by the quantizer step-size, the whole computation stays in the quantized DCT domain.

3.2. Inter coded reference: If the referenced macro block (MB_{t-1}) is Inter coded (Forward predicted), the motion vector of MB_t must be composed with that of the reference. Sub-cases 3.2.1 and 3.2.2 cover the motion vector composition scheme used.

3.2.1. MB1_{t-1}, MB2_{t-1}, MB3_{t-1}, MB4_{t-1} are all Inter coded: If all the macro blocks overlapped by the referenced area are Inter coded and the referenced area lies within a macro block boundary, the vector of the referenced macro block is simply added to the vector of MB_t:

    U_t^s = U_t + U_{t-1}                                            (3)
    V_t^s = V_t + V_{t-1}                                            (4)

where U_t and V_t are the original components of the motion vector of MB_t and U_t^s and V_t^s are the modified components. If the referenced area does not lie within a macro block boundary (Figure 3), we check whether there exists, among the neighbouring macro blocks, an Inter macro block whose area overlaps more than 3/4 with the referenced area: if it exists, FDVS [9] is used to find the resultant vector; otherwise we use bilinear interpolation [8]. After the resultant vector is found, the final modified vector is obtained by simple vector addition (equations 3, 4).

3.2.2. One of the four (MB1_{t-1}, MB2_{t-1}, MB3_{t-1}, MB4_{t-1}) is an Intra macro block: If one of the four turns out to be Intra coded while the neighbouring macro blocks are not Intra coded, the required vectors can be found by recursively tracing the neighbours' vectors; otherwise MB_t is converted into an Intra macro block and replaced by the referenced picture area.

Figure 3. New prediction errors computation for non-MC macro blocks
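The choice between FDVS and bilinear interpolation in sub-case 3.2.1 can be illustrated as follows. This is a simplified sketch, not the paper's implementation: the overlap geometry assumes 16x16 blocks and a single referenced area, and the 3/4 dominance test stands in for the full FDVS rule.

# Sketch of motion vector composition, eqs. (3)-(4), with an FDVS-style
# dominant-vector choice and a bilinear-interpolation fallback.
def overlap_share(ref_x, ref_y, mb=16):
    """Area shares of a 16x16 referenced region over the 4 covered blocks."""
    dx, dy = ref_x % mb, ref_y % mb
    return [(mb - dx) * (mb - dy), dx * (mb - dy), (mb - dx) * dy, dx * dy]

def compose(mv_t, ref_x, ref_y, candidate_mvs):
    """Add a traced reference vector to mv_t."""
    shares = overlap_share(ref_x, ref_y)
    if max(shares) > 0.75 * sum(shares):        # a 3/4-dominant block: FDVS
        u, v = candidate_mvs[shares.index(max(shares))]
    else:                                       # no dominant block: bilinear
        tot = float(sum(shares))
        u = sum(s * m[0] for s, m in zip(shares, candidate_mvs)) / tot
        v = sum(s * m[1] for s, m in zip(shares, candidate_mvs)) / tot
    return (mv_t[0] + u, mv_t[1] + v)           # eqs. (3)-(4)

print(compose((3, -2), ref_x=21, ref_y=6,
              candidate_mvs=[(1, 0), (2, 1), (0, 0), (1, 1)]))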

Case 4: In the frames that appear after the dropped frame, if the macro block is (Forward + Backward) predicted, the forward vector needs to be modified, which is done in the same way as discussed above for Case 3; otherwise the macro block is re-encoded as an Intra macro block. After the new forward vector is found, the new quantized DCT coefficients can be found by

    Q[DCT(e_t^s)] = Q[DCT(e_t)] + Q[DCT(e_{t-1})] / 2                (7)

which can be derived as follows. We have the three equations

    Q[DCT(e_t^s)] = (Q[DCT(MB_t)] − Q[DCT(MB_{t-2})])/2 + (Q[DCT(MB_t)] − Q[DCT(MB_{t+1})])/2      (8)
    Q[DCT(e_t)]   = (Q[DCT(MB_t)] − Q[DCT(MB_{t-1})])/2 + (Q[DCT(MB_t)] − Q[DCT(MB_{t+1})])/2      (9)
    Q[DCT(e_{t-1})] = Q[DCT(MB_{t-1})] − Q[DCT(MB_{t-2})]                                           (10)

and Eq. (8) − Eq. (9) − (Eq. (10))/2 = 0 gives the required equation (7).

4. Implementation

In this section we give a brief overview of our implementation. We have implemented our algorithms in C++, with four header files: declarations.h, block_dct.h, Huffman.h, and frame_modifier.h, the last containing the functions for motion activity calculation, motion vector composition, re-calculation of prediction errors, and computation of the number of bits in the modified frames (this helps in evaluating the size of the output video stream). The Mpeg-1 video stream is taken as the input, along with the quality factor N, which is decided by the channel capacity and the target frame rate. The input is processed to obtain the details of the video stream, including the required header information, the motion vectors, and the quantized DCT coefficients of each macro block; this is done using mpeg_stat, a free tool available at [16]. The output generated by our modules thus includes:
1. The new values of bits per frame.
2. The new motion vectors.
3. The quantized DCT coefficients of the modified frames.
4. The re-judged coding types of the macro blocks.

5. Experimental results

We conducted experiments over Mpeg-1 videos to evaluate the performance of our transcoding architecture; the results for three sample Mpeg-1 video sequences are presented. The three video sequences, ACT60.mpg, ADVERTISMENT.mpg, and SCHONBEK.mpg, are compressed Mpeg-1 videos, each with a resolution of 160×120 and 15 frames per GOP with the sequence IBBPBBPBBPBBPBB. We reconstructed the new frames by performing inverse quantization and inverse DCT to compare them with the frames of the original video sequence. The average PSNR (peak signal to noise ratio) values for N = 3 and N = 4 were calculated, and we also took observations on the smoothness of the output video by showing the input and output video sequences to a group of 30 people; from their feedback, the output video is found to be very good up to quality factor N = 3 (for a GOP of size 15). A few images are also shown below to compare the visual quality of the reconstructed reference frames (for N = 3 and N = 4) with the original frames taken from the input videos. Thereby, a good tradeoff between bit rate reduction and video quality is achieved.

From the results obtained, we observe that the average PSNR value decreases as the quality factor N increases, which is a direct consequence of the increase in drift errors with increasing N.
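The DCT-domain residual updates of eqs. (5)-(7) are pure coefficient arithmetic, which is why no inverse DCT is needed. A brief sketch, with NumPy arrays standing in for the quantized 8x8 coefficient blocks:

# Sketch of the DCT-domain residual update after a P frame is dropped.
import numpy as np

def new_residual_p(q_e_t, q_mb_prev, q_mb_new_ref):
    # Eqs. (5)-(6): e_t' = e_t + (MB_{t-1} - MB_{t-2}) for forward prediction.
    return q_e_t + (q_mb_prev - q_mb_new_ref)

def new_residual_b(q_e_t, q_mb_prev, q_mb_new_ref):
    # Eq. (7): the bidirectional case averages two references, so the
    # correction term is halved.
    return q_e_t + (q_mb_prev - q_mb_new_ref) / 2.0

rng = np.random.default_rng(0)
e, a, b = (rng.integers(-8, 8, (8, 8)) for _ in range(3))
print(new_residual_b(e, a, b).shape)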

Table 1. Average PSNR, percentage reduction in stream size, and change in frame rate for the three sample Mpeg-1 video sequences (ACT60.mpg, ADVERTISMENT.mpg, and SCHONBEK.mpg): for each sequence the table reports the input frame rate (fps) and, for quality factors N = 3 and N = 4, the percentage reduction in stream size, the average PSNR (dB), and the output frame rate (fps).

Figure 4. Sample frame taken from ACT60.mpg: (a) original frame, (b) for N=3, (c) for N=4.

Figure 5. Sample frame taken from ADVERTISMENT.mpg: (a) original frame, (b) for N=3, (c) for N=4.

6. Conclusion & future work

We have proposed a temporal transcoding architecture which helps in solving the problem of video transmission over networks that consist of channels of low bandwidth and end terminals with varying frame rate constraints. The architecture skips frames (both reference and non-reference frames) on the basis of motion, and it is low in complexity because the transcoding is carried out completely in the DCT domain. The macro block coding re-judgement scheme and the re-calculation of the prediction residuals reduce the cumulative errors, and the re-encoding errors are low because the quantized DCT coefficients are directly manipulated. From the results we have obtained, this architecture works well in terms of output video quality for videos with high motion as well as low motion, while having a low computational complexity. One of the areas that still seems to be open is the design of efficient frame skipping policies. The
[13] T. Chau.A. February 1998. 2005. 2005. pp.” IEEE International Conference on Multimedia and Expo. July 2005. pp. John Apostolopoulos. “Digital Video Transcoding”. 2005. Lin.1. on page(s): 1306. [4] Chunrong Zhang.” IEEE Transactions on Consumer Electronics.3. “Temporal transcoding for mobile video communication. “A DCT domain frame-skipping transcoder. ICIP 2005. Hwang. Kumar.” IEEE Transactions on Image Processing. [12] Susie Wee. [11] M. Chan. N. pp 502. Shu and L. W. 1 (1). Patil and R. [6] J. T. vol. Jan. Patil and R. no. 2001 Proceedings.T. pp. “Video transcoding: an overview of various techniques and research issues. Proceedings of the 2004 International Symposium on Circuits and Systems. june-2007. IEEE Transactions on Multimedia. Vol. [14] V.edu/ftp/pub/multimedia/mpeg/stat/ 97 . Lonetti and F. Xin.” International Conference on Image Processing. IEEE Signal Processing Magazine. March 1999.18-29. Wu and C. F. [8] J. no. Lin. Lars Wolf. MobiQuitous 2005. [9] J. 2001. Christopoulos. vol. and M. Lin. ICME 2005. vol. Chi Yuan and Feng Wang. References [1] Ahmad I. Kumar. 88-98. [7] Jens Brandt.” Signal Processing: Image Communication.” IEEE Transaction on Multimedia. ‘Video Transcoding Architectures and Techniques: An Overview”. “Dynamic FrameSkipping in Video Transcoding. Xiaohui Wei.”. Vetro. Volume: 1. vol. Ing. 11.506. Shanableh and M. Shibao Zheng. Fung. C. vol. “Fast algorithms for DCT-domain video transcoding. “Motion Vector Refinement for High-Performance Transcoding”. “An Arbitrary Frame-Skipping Video Transcoder.. August 2002. and H. Youn. pp.18.93. 44. on page(s): 421-424. 11-14 Sept.. pp.2003.20. It would be interesting to investigate this problem in an analytical way by the use of tools like dynamic programming and randomization to design new frame skipping strategies. L. [5] H. “New architecture for dynamic frame skipping transcoder. Sun. “Frame-skipping Transcoding with Motion Change Consideration”. [15] V. M. Bonuccellit. “A novel low-complexity and high-performance frame-skipping transcoder in DCT domain. Volume: 51.W. Issue: 4.literature consists of frame skipping policies which are mainly defined by motion information in an experimental way. Volume:1. P. [3] Chia-Wen Lin.” Proceedings of the 17th International workshop on Network and Operating Systems Support for Digital Audio and Video (NOSSDAV'07). Yu Sun and Ya-Qin Zhang. T. on page(s): 793.2005. W.” The Second Annual International Conference on Mobile and Ubiquitous Systems: Networking and Services. A video analyzing tool for mpeg-1 videos http://bmrc. Proceedings of IEEE. Yuh-Reuy Lee. [10] K. pp 1456-1459. pp. on page(s): I-817-20. 30-40. 2005 [2] A. Martelli. “Hybrid DCT/pixel domain architecture for heterogeneous video transcoding.” IEEE International Conference on Image Processing. Sun and C. Oct. “Multidimensional Transcoding for Adaptive Video Streaming. Nov. Y. Vol.804. July 2005. and W. IEEE Transactions on Consumer Electronics. October – 2002. D.T.berkeley. Sun. C.1312.773-776. Ghanbari. September 2003. Siu. Issue: 5.2. Bo Shen. 886–900. May 2004. Mar. Publication Date: 2001.84-97. Volume:7. [16] mpeg_stat. “Compressed-Domain Video Processing” Hewlett-Packard Laboratories Technical Report HPL-2002-282 to be published in CRC Handbook on Video Databases.

Electricity Demand Forecasting Based on a Feedforward Neural Network Trained by a Novel Hybrid Evolutionary Algorithm

Wenyu Zhang (College of Atmospheric Sciences, Lanzhou University, Lanzhou 730000, China), Yuanyuan Wang, Jianzhou Wang, and Jinzhao Liang (School of Mathematics & Statistics, Lanzhou University, Lanzhou 730000, China)
wjz@lzu.edu.cn, yuzhang@lzu.edu.cn, liangjzh07@lzu.edu.cn, Chejxh07@lzu.edu.cn

Abstract

Electricity demand forecasting is an important index for making power development plans and for dispatching the loading of generating units in order to meet system demand. In order to improve the accuracy of the forecasting, in this paper we apply the feedforward neural network for electricity demand forecasting, trained by a hybrid evolutionary algorithm that crosses over the PSO and AFSA algorithms. The method is called the AFSA-PSO-parallel-hybrid evolutionary (APPHE) algorithm. The proposed method has been applied to a real electricity load forecasting task; the results show that the proposed approach has better generalization performance and is more accurate and effective than a feedforward neural network trained by particle swarm optimization.

1. Introduction

With the establishment of the electric power market, a high forecast precision is much more important than before: inaccuracy or large error in the forecast not only means that load matching is not optimized but also influences the stability of the running power system. The traditional models of short-term load forecasting of power systems, such as the time series model and the regression analysis model [1], are too simple to simulate the complex and fast changes of the power load [2]. Artificial neural network technology is an effective way to solve such complex non-linear mapping problems, and the feedforward neural network is a kind of neural network with a good structure that has been widely used [3]. But there are still many drawbacks if we simply use neural networks, such as a slow training rate, easy trapping into local minimum points, and bad ability on global search, etc. [4].

In recent years, swarm intelligence methods such as the Particle Swarm Optimization (PSO) algorithm, the artificial immune algorithm, and the Artificial Fish Swarm Algorithm (AFSA) have been applied to function optimization, parameter estimation, and parameter selection problems. These algorithms exhibit different strengths, such as scalability, adaptation, autonomy, fault tolerance, speed, and parallelism, in different applications. Tao [5] applied the PSO algorithm in optimization: its usage is flexible and its convergence speed is fast, but PSO has disadvantages such as sensitivity to initial values, premature convergence, slow convergence in the later stage of the evolution, and easy trapping into local optima. The AFSA, by contrast, has a strong ability to avoid local extrema and achieve the global extremum; it has been applied in nonlinear function optimization and other problems. AFSA and PSO are much alike in their inherent parallel characteristics, whereas experiments show that they have their specific advantages when solving different problems. What we would like to do is obtain both their excellent features by synthesizing the two algorithms.

Inspired by this idea, we propose a hybrid evolutionary algorithm based on the PSO and AFSA methods to train the feedforward neural network. The main idea of the hybrid algorithm is to divide the particles into two groups: the first group of particles executes the AFSA algorithm while the second group executes the PSO algorithm simultaneously; then the largest fitness in the two systems is found and the best solution is transmitted back to the PSO population, so that the whole system has only one largest fitness and one best position. Afterward, the two subsystems continue to execute in parallel.

The remainder of this paper is organized as follows. Section 2 provides a brief description of the multi-layer feedforward ANN, PSO, AFSA, and the APPHE algorithm. Section 3 describes the research data and experiments. Section 4 summarizes and analyzes the empirical results and discusses conclusions and future research issues.
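The hybrid idea described above can be sketched as a short loop. This is a deliberately simplified illustration, not the authors' implementation: afsa_step and pso_step are stand-ins (a single AFSA searching move and a crude drift toward the best), in place of the full behavior rules and the constriction-factor update given in Section 2.

# Sketch of the APPHE idea: two sub-populations evolve in parallel and
# share the single best solution each generation (minimization).
import random

def afsa_step(x, school, fit):
    # Stand-in: one random "searching" move, kept only if it improves.
    cand = [xi + random.uniform(-0.1, 0.1) for xi in x]
    return cand if fit(cand) < fit(x) else x

def pso_step(x, best, fit):
    # Stand-in: drift toward the shared best position with a little noise.
    return [xi + 0.5 * (bi - xi) + random.uniform(-0.01, 0.01)
            for xi, bi in zip(x, best)]

def apphe(fit, dim, n=40, iters=200):
    pop = [[random.uniform(-1, 1) for _ in range(dim)] for _ in range(n)]
    fish, swarm = pop[: n // 2], pop[n // 2:]
    best = min(pop, key=fit)
    for _ in range(iters):
        fish = [afsa_step(x, fish, fit) for x in fish]    # AFSA half
        swarm = [pso_step(x, best, fit) for x in swarm]   # PSO half
        cand = min(fish + swarm, key=fit)
        if fit(cand) < fit(best):
            best = cand        # best solution fed back to the PSO half
    return best

print(apphe(lambda v: sum(t * t for t in v), dim=3))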
the swarm intelligent methods such as Particle Swarm Optimization (PSO) algorithm. we apply the feedforward neural network for electricity demand forecasting.2009. etc. the first particles execute the AFSA algorithm while the second particles execute the PSO algorithm simultaneously. a high forecast precision is much more important than before. 1. The method called AFSA-PSOparallel-hybrid evolutionary (APPHE) algorithm.2009 International Conference on Computer Engineering and Technology Electricity Demand Forecasting Based on Feedforward Neural Network Training by A Novel Hybrid Evolutionary Algorithm Wenyu Zhang Yuanyuan Wang College of Atmospheric Sciences. parameters selection problems. These algorithms reflect different better properties with their characteristics such as scalability. The whole system only has a largest fitness and a best position. fault tolerance. speed.

m. Every node in each layer is connected to every node in the adjacent layer. one or more hidden layers and an output layer. 2. convergent accuracy and can avoid overfitting in some extent. i =1 k =1 q * O q xi (i = 1. Section 3 describes the research data and experiments. . we adopt [10] to describe the AFSA Algorithm. and θ wi (or θ vj ) are bias terms that represent the threshold of the transfer function f .729. xD ) . every particle represent a set of weights and bias. 2. and the constriction factor k is 0. xk is the input of the kth node in the input layer.05 ). in order to guarantee the convergence of the PSO algorithm.05. The results which are compared with feedforward neural network trained by particle swarm optimization (PSO) algorithm show much more satisfactory performance. 2 . (5) yi = f (∑ (wij f (∑ v jk xk + θvj ) + θwi )). ϕ2 = 2. d = 1. i = 1. and then find the largest fitness in the two systems.algorithm simultaneously. O (1) j =1 k =1 H n ϕ = ϕ1 + ϕ2 . output layer has O output nodes. Standard AFSA Algorithm Artificial Fish Swarm Algorithm was first proposed in 2002 [6]. The learning error E can be calculated by the following formulation [8]: E=∑ O Ek k k 2 where Ek = ∑ ( yi − Ci ) . converges quickly towards the optimal position. vmax ] and vmax is a designated value. vid ∈ [−vmax .3. 2. … . x2 . Supposed that the input layer has n nodes. PSO. when the constriction where yi is the output of the ith node in the output layer. the hidden layer has H hidden nodes. the constriction factor k is defined as follows k= Where ( ϕ1 2 2 − ϕ − ϕ 2 − 4ϕ is used.1 The structure of algorithm and definition Suppose that the searching space is D-dimensional and N fish form the colony. D . and 99 . AFSA and APPHE algorithm. And then we apply the AFSA-PSO-parallel-hybrid evolutionary algorithm to train the feedforward neural network. … .2 The PSO algorithm The PSO Algorithm was first proposed by Kenney and Eberhart in 1995 [9] and could be performed by the following equations: 2. The AF individual state can be expressed with vector: X = ( x1 .1 = 2. the hidden transfer function and the output transfer function are both sigmoid function. The remainder of this paper is organized as follows: Section 2 provides a brief description of Multi-layer feed forward ANN. D) is the variable to be searched for the optimal value. We defined the fitness function of the ith training sample as follows: fitness( xi ) = E ( X i ) (2) When the APPHE algorithm is used in evolving weights of feedforward neural network. Moreover. where hidden and output layers. where k q is the number of total training samples.…. ϕ > 4 factor . Section 4 summarizes and analyzes empirical results and discusses the conclusions and future research issues. k yi − Ci is the error of the actual output and desired output of the ith output unit when the kth training sample is used for training. Usually. Output the best solution and stop if the termination criterion is satisfied. In this paper. wij is the connective weight between nodes in the v jk represents the connective ϕ is set to 4. rand1 and rand 2 are random numbers uniformly distributed within [0. … . The computed output of the ith node in the output layer is defined as follows [7]: vid(t +1 =k⋅[vid()+ϕ ⋅rand ⋅(p −xid)+ϕ ⋅rand2 ⋅(pgd −xid)](3) ) t 1 1 id 2 xid (t + 1) = xid (t ) + vid (t + 1) (4) where i = 1. consists of an input layer. weight between the nodes in the input and hidden layers. Multi-layer feedforward neural network An FNN.3.1] . 
AF food consistence at present position can be represented by: FC = f ( X ) . 2. Methodology 2.1. In this paper.

3. If n f = 0 . and calculate their 2. j ≤ Visual ) .3. FC j are food consistence of time state X i . 2. x jk . (3) Following behavior Let X i denote the AF states at present. if FCmin / n f < δ FCi . and find j X i ' s visual field. j = X i − X j . X j . Divide the particles into two. select a state X j randomly again and judge whether it satisfies the forward condition. Meaning of symbols in following Xi formula is same with these. it moves a step randomly.3 Select the behavior According to the character of the problem.2 The description of behavior (1) Searching behavior We randomly select a new state X in current state xinextk = xik + Random( Step ) xck − xik Xc − Xi (8) δ is Otherwise executes the behavior of searching food. 2. it goes forward a step in this direction. position of its xck = (∑ x jk ) / n f j =1 (7) If FC c / n f < δ FC i . 5.2. and then initialize AFSA and PSO subsystems respectively. the final value of the bulletin is the optimal value of the problem. and then selects an appropriate behavior. AF executes the behavior of searching food. For example. Step is the maximum step length and crowd factor. Mathematically x jk − xik ⎧ FC j < FCi ⎪xinext = xik + Random(Step) X j − Xi ⎪ ⎨ (6) ⎪ FCi ≤ FC j ⎪xinextk = xik + Random(Step) ⎩ Where k = 1. 4. k element of state vector X vector at the next j . The two subsystems execute PSO algorithm simultaneously. Memorize the best solution as the final solution and stop if the best individual in one of the two X c . Step ] . FCc denotes the food consistence of the center position and n f denotes the centre position number of its fellows in the near fields. and then transmit the best solution back to PSO populations. x inextk = x ik + Random ( Step ) (4)Bulletin Bulletin is used to record the optimal AF's state and the AF food consistence at present position. ⋅⋅⋅. X j fellow. Evaluate the values derived by swarming behavior and following behavior. which means that the fellow problem. Random ( Step ) represents a random number in the range [0. the AF evaluates the environment at present. the distance between the AF individuals can be expressed as: forward a step to the fellow centers. Execute AFSA and PSO simultaneously. the simplest way is trial method. if n f explores the center Mathematically nf ≥ 1. Mathematically d i . 100 . The mathematical expression is as follows: X min has high food consistence and the surrounding is not very crowded. forward a step to the fellow X min . 2. AFSA-PSO-parallel-hybrid evolutionary algorithm The performance of the novel algorithm is described as follows: 1. which means that the fellow center is high and surroundings is not very crowded. D .4. (2) Swarming behavior An AF with the current state seeks the companion's number in its current neighbourhood where satisfy d i . in which FCmin is the minimum value of its fellows in the near fields (d i . Update the bulletin with the better state of the AF’s. AF executes the behavior of searching food. xik and xinext represent the x m in k − x ik (9) X m in − X i Otherwise executes the behavior of searching food. FC i . j < Visual . Find the best solution in the two systems.3. X i and AF’s state X inext respectively. the state of which is the optimal solution of the system. otherwise. If n f = 0 . If it can’t satisfy after try _ number times. and implement the behavior whose result is the minimum.FC is the objective function. If FCi > FC j in the minimum X min . Visual represents the vision distance. The acquiescent behavior is searching food.

3. Application

3.1 Problem description

Time series prediction is to deduce the future values of a time series according to its past values. Suppose a time series {x_k}, k = 1, ..., N, is given. Using the delays method, we represent the data in d-dimensional space by the vectors X_k = [x_k, x_{k+1}, ..., x_{k+(d-1)}], where d is the embedding dimension. The prediction can be described as

  X_{k+T} = f(X_k)      (10)

  f: (x_k, x_{k+1}, ..., x_{k+(d-1)}) -> (x_{k+T}, x_{k+1+T}, ..., x_{k+(d-1)+T})      (11)

  x_{k+(d-1)+T} = g(x_k, x_{k+1}, ..., x_{k+(d-1)})      (12)

where g is an unknown function and T is the prediction step; T = 1 means one-step-ahead prediction, and T > 1 means multi-step prediction. We can use the map f to make the prediction. In this work, we try applying a feedforward neural network to estimate the unknown function g, and we adopt [11] to analyse the problem.

Two error metrics are employed to evaluate the prediction performance. One is the related index, defined as

  R^2 = 1 - sum_{i=1}^{n} (Y_i - Yhat_i)^2 / sum_{i=1}^{n} (Y_i - Ybar)^2      (13)

where Yhat_i is the forecasting data, Y_i is the actual data, and Ybar is the average of the time series {Y_i, i = 1, ..., n}; the closer R^2 is to the value 1, the more satisfactory the performance is. The other is the relative error, defined as

  (1/n) sum_{i=1}^{n} |Y_i - Yhat_i| / Y_i

where n is equal to 24, the number of hourly observations in each predicted day.

3.2 Simulation

We collected a 28-day hourly load series from a state of Australia; the data analysis and forecasting are done with this 672-observation series. In this paper we use an FNN with the structure 5-5-1 to address the problem, and the embedding dimension d is 5. Every weight in the network was initially set in the range [-60, 60], and all the biases in the network were set in the range [-50, 50]. The population size is 45. The maximum velocity is assumed as 2 and the minimum velocity as -2. The maximal iterative step is 500. In the AFSA parameter setting, Visual = 1, delta = 1.3, try_number = 15 and Step = 1.5. Simulations are performed in MATLAB 7.1 on a 2.8 GHz Pentium PC.

The multi-layer feedforward ANN is trained with the 26-day hourly load series and then forecasts the next two days. Figure 2 shows the predictions of the next two days' load series; from Figure 2 we can see that the predictions based on the APPHE-FNN algorithm are closer to the actual data than the PSO-FNN algorithm's. The training processes of the APPHE-FNN algorithm and the PSO-FNN algorithm are shown in Figure 3. From Figure 3 it can be seen that the PSO-FNN may rapidly stagnate, so that the solution no longer improves, while the APPHE-FNN can still search progressively until the global optimum is found. It is found from Table 1 that the related index and the relative error of the APPHE-FNN algorithm for the two predicted consecutive days are obviously much better than the PSO-FNN algorithm's. Through the comparative analysis, it can be seen that the APPHE-FNN algorithm has a more accurate forecasting capacity and considerably better convergence than the PSO-FNN.
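For clarity, the delay embedding of Eqs. (10)-(12) and the two error metrics can be sketched as follows. The synthetic sinusoidal "load" series is an assumption standing in for the Australian data, which is not reproduced here.

```python
import numpy as np

def embed(series, d=5, T=1):
    """Build (X_k, target) pairs: X_k = [x_k, ..., x_{k+d-1}] -> x_{k+d-1+T}, Eqs. (10)-(12)."""
    X = np.array([series[k:k + d] for k in range(len(series) - d - T + 1)])
    y = series[d + T - 1:]
    return X, y

def related_index(actual, forecast):
    """Related index R^2, Eq. (13)."""
    actual, forecast = np.asarray(actual), np.asarray(forecast)
    return 1.0 - np.sum((actual - forecast) ** 2) / np.sum((actual - actual.mean()) ** 2)

def relative_error(actual, forecast):
    """Mean of |Y_i - Yhat_i| / Y_i over the n = 24 hours of a predicted day."""
    actual, forecast = np.asarray(actual), np.asarray(forecast)
    return float(np.mean(np.abs(actual - forecast) / actual))

hours = np.arange(672)                                   # 28 days x 24 hours
load = 9000.0 + 1500.0 * np.sin(2 * np.pi * hours / 24)  # toy series with a daily cycle
X, y = embed(load, d=5, T=1)
print(X.shape, y.shape)                                  # (667, 5) (667,)
```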

Table 1. The prediction results (R^2 and relative error) for the two consecutive predicted days

  Algorithm    Metric            27th     28th
  PSO-FNN      R^2               0.85     0.81
  PSO-FNN      Relative error    5.92%    11.82%
  APPHE-FNN    R^2               0.92     0.86
  APPHE-FNN    Relative error    2.71%    3.64%

(Fig. 2: The prediction results based on the APPHE-FNN algorithm and the PSO-FNN algorithm respectively; actual data versus the two forecasts, electricity load (MW) over the 48 forecast hours.)

(Fig. 3: Fitness curves of the FNN based on the APPHE algorithm and the PSO algorithm respectively; fitness versus iteration over 500 iterations.)

4. Conclusions

In this paper, a feedforward neural network trained by the AFSA-PSO-parallel-hybrid evolutionary algorithm is proposed for electricity demand forecasting. The results show that the proposed algorithm has a better ability to escape from the local optimum and a better predicting ability than the PSO-FNN algorithm. The high precision has a significant impact on the economic operation of the electric utility, since many decisions based on these forecasts have significant economic consequences. It should be pointed out that, although the processes here are focused on electricity load forecasting, we believe that the proposed method can be applied to many other complex time series forecasting problems, such as financial series and hydrological series forecasting.

Acknowledgment

The research was supported by the NSF of Gansu Province in China under Grant ZS031-A25-010-G.

References

[1] Niu Dongxiao, Cao Shuhua, Zhao Lei, et al. Power Load Forecasting Technology and Its Application. Beijing: China Electric Power Press, 2001.
[2] Dongxiao Niu, Jinying Li, Jinchao Li. Daily Load Forecasting Using Support Vector Machine and Case-Based Reasoning. 2007 Second IEEE Conference on Industrial Electronics and Applications, 2007, pp. 1271-1274.
[3] Changrong Wu, Mingquan Ye. The application of BP neural networks in hospital multifactor time series forecast. Fujian Computer, Jan 2005, pp. 38-39.
[4] Shu-xia Yang. Power Demand Forecast Based on Optimized Neural Networks by Improved Genetic Algorithm. Proceedings of the Fifth International Conference on Machine Learning and Cybernetics, Dalian, 13-16 August 2006, pp. 2877-2881.
[5] Tao Xiang, Kwok-wo Wong, Xiaofeng Liao. An improved particle swarm optimization algorithm combined with piecewise linear chaotic map. Applied Mathematics and Computation, Volume 190, Issue 2, 15 July 2007, pp. 1637-1645.
[6] Li Xiaolei, Shao Zhijiang, Qian Jinxin. An optimizing method based on autonomous animats: fish-swarm algorithm. Systems Engineering Theory and Practice, 2002, 22(11): 32-38.
[7] Chern-Hwa Chen, Jong-Cheng Wu and Jow-Hua Chen. Prediction of flutter derivatives by artificial neural networks. Journal of Wind Engineering and Industrial Aerodynamics, In Press, Corrected Proof, Available online 7 April 2008.
[8] Jing-Ru Zhang, Jun Zhang, Tat-Ming Lok, Michael R. Lyu. A hybrid particle swarm optimization-back-propagation algorithm for feedforward neural network training. Applied Mathematics and Computation, Volume 185, Issue 2, 15 February 2007, pp. 1026-1037.
[9] J. Kennedy and R. Eberhart. Particle swarm optimization. In Proceedings of the IEEE International Conference on Neural Networks (Perth, Australia), IEEE Service Center, Piscataway, 1995, pp. 1942-1948.
[10] Yi Luo, Juntao Zhang and Xinxin Li. The Optimization of PID Controller Parameters Based on Artificial Fish Swarm Algorithm. Proceedings of the IEEE International Conference on Automation and Logistics, 18-21 August 2007, Jinan, China, pp. 1058-1062.
[11] Xiaodong Wang, Haoran Zhang, Changjiang Zhang, et al. Time series prediction using LS-SVM with particle swarm optimization. Advances in Neural Networks (LNCS 3972), 2006, pp. 747-752.

Investigation on the Behaviour of a New Type of Airbag*

Hu Lin, Liu Ping - College of Mechanical Engineering, University of Shanghai for Science and Technology, Shanghai, China, 200093. Hulin888@sohu.com
Huang Jing - The State Key Laboratory of Advanced Design & Manufacture for Vehicle Body, Hunan University, Changsha, China, 410082. luckycitrine@163.com

* Supported by Shanghai Leading Academic Discipline Project, Project No. J50503.

Abstract - In this research a new type of airbag system (NAB) was developed, which consists of two flat layers and a middle layer with a tube-type structure. A prototype of the airbag was built with a volume of 40 litres. The performance of the prototype airbag was investigated using static deployment tests and sled crash tests with a Hybrid III dummy. A finite element (FE) model of the airbag was developed and validated using results from the airbag deployment tests and the sled crash tests. In order to determine the potential for occupant protection, computer simulations of the out-of-position (OOP) occupant with the NAB and a normal driver-side airbag (DAB) were carried out using the Hybrid III FE model. The results indicate that the NAB needs less gas than the DAB to obtain the same load and displacement curve, that the acceleration of the dummy head using the NAB is smaller than that using the DAB, and that the leading velocity of the NAB is lower than that of the DAB. Finally, a parametric study was carried out to find the more sensitive and critical design variables with respect to occupant injuries.

Index Terms - Sandwiched airbag, OOP occupant protection, sled crash test, finite element model

I. INTRODUCTION

Airbags have been proven to be very helpful in protecting drivers and passengers in many cases of automotive crashes. Occupant protection during a crash is achieved by using a fully-deployed airbag to dissipate the frontal crash forces experienced by the driver over a larger body area and to gradually decelerate the occupant's head and torso to prevent contact with other interior surfaces. To achieve this, the airbag itself must deploy rapidly, in less than 50 milliseconds. Development of the traditional airbag therefore requires a powerful inflation system and a quick response of the ignition system, which puts high requirements on the systems and thus results in high costs.

However, side effects may be serious if the system is not carefully designed. One common side effect is that the airbag may harm occupants, especially children and small women, when it deploys improperly in a crash (Chritina et al., 1993; Alex et al., 1995; John et al., 1998). Two phases of airbag deployment have been associated with high, injury-causing localized forces: the punch-out phase and the membrane-loading phase. An occupant positioned extremely close to the airbag module at the time the airbag begins to inflate is exposed to highly localized forces. It has been found that chest injury is often associated with the airbag "punch-out" force and occurs at a very early stage of the airbag deployment, while neck injury is more likely induced by the "membrane" loading at a relatively late deployment time [2]. According to the findings of biomechanics research, field automotive accidents and laboratory tests, the head, chest and neck are the body regions most vulnerable to injury while an airbag interacts with an OOP occupant. It is believed that these side effects would be minimized if the early development pattern of the airbag could be controlled.

To reduce the possible side effects of the airbag system and to place suitable requirements on the inflation system, a sandwiched tube-type airbag system (NAB) [3] was developed (Zhong et al., 2005) and is evaluated in this paper. It consists of an upper layer flat airbag, a lower layer flat airbag and a middle layer of tube-type airbags. The upper layer flat airbag is designed to touch the occupant in a crash, and the middle layer tube-type airbags are designed to support the upper layer and the lower layer. In the event of a crash, the middle layer is inflated first and most, while the other two layers are inflated later and less rapidly. The whole NAB volume is about 40 L. The objective of the new airbag system is to enhance occupant safety in passenger car collisions while reducing the injury risks for small and OOP occupants. Proper development of the airbag system depends on several factors, including the structure of the airbag, the inflation system, the ignition system and the ignition time.

II. METHOD AND MATERIALS

A prototype of the NAB was developed. Two groups of deployment tests using the DAB and the NAB were performed to support and validate the computational modeling efforts and to compare their deployment properties; then sled tests and virtual tests were used to estimate the protection efficiency of the NAB; finally, the parametric study was carried out.

Experimental set-up - An experimental inflation device was designed as shown in Figure 1. The device can be used to inflate the airbag and to provide gas with prescribed pressure and leakage. The whole deployment system mainly consists of an air compressor, a tank, fast-acting valves and sensors (shown in Figure 1). The air compressor 1 provides compressed air to the tank 4. Solenoid valve 2 functions as a switch between the air compressor and the tank. Safety valve 3 prevents the system from pressure overloading. Pressure sensors 9 and 6 are used to transfer the tank pressure signal and the airbag inside pressure signal to the computer, respectively. A/D board 8 performs the conversion from the pressure voltage signals to numerical signals.
The outlet of the fast-acting valve is connected to the inlet of the new structure airbag. (Fig. 1: Layout of the airbag test system: 1. air compressor; 2. solenoid valve; 3. safety valve; 4. air tank; 5. fast-acting valve; 6. pressure sensor; 7. fixed airbag; 8. A/D board; 9. pressure sensor.)

Airbag deployment test - The behaviors of the NAB system are investigated using the experimental device shown in Figure 1. The NAB system and the traditional airbag system are tested with the device under different loading conditions: (1) static deployment without impact, and (2) dynamic deployment with impact. The dynamic stiffness of the airbag system is estimated by dropping composite blocks onto the inflated airbag. Experimental results show that, by adjusting the pressure alone, equivalent dynamic stiffness properties may be obtained with different types of airbags. This means that the NAB system has the potential to provide a similar load-displacement characteristic as the traditional airbag system would, although the NAB system requires much less gas.

Sled crash test - To examine the actual behavior of the NAB system in protecting occupants, sled crash tests are carried out using a 50th percentile HYBRID-III dummy with a standard safety belt at an impact speed of 35 km/h. Two energy-absorbing square metal beams of size 120 x 120 x 500 mm, with a thickness of 1.2 mm, are used in the sled test. Computer simulation was also conducted with the HYBRID-III dummy FE models.

Virtual testing - In response to the side effects of an airbag in low- and moderate-severity crashes, FMVSS 208, issued by NHTSA in May 2000, includes performance requirements to ensure that airbags developed in the future do not pose an unreasonable risk of serious injury to OOP occupants [4]. The FMVSS-208 NPRM proposed that static OOP tests should be a mandatory requirement starting in 2003. According to the above requirement, for the 5th percentile female "low risk deployment", the two test positions for OOP situations are as shown in Figure 2: driver position 1 (~ISO 1) is used to obtain the maximum neck loading (membrane) and driver position 2 (~ISO 2) is used to obtain the maximum chest loading (punch-out) from the deploying airbag. (Fig. 2: (a) Out-of-position 1; (b) out-of-position 2.) The 5th percentile adult female dummy OOP virtual tests were used for the parametric analysis, and finally the virtual test results with the DAB and the NAB were compared.

III. NUMERICAL MODEL AND EXPERIMENTAL VALIDATION

NAB model - The NAB system is modeled with 18408 Belytschko-Tsay membrane elements. Material properties of Nylon 66 are assigned to the membrane elements. A contact interface is defined between the airbag and the dummy's head, chest and hands, and a self-contact interface is also defined within the airbag system. Simulations of the static and dynamic deployment of the NAB system were carried out using the LS-DYNA program to study the features of the inflation process.

Static deployment - The high-speed films from the static airbag deployment are shown in Figure 3a; the features of the airbag deployment were examined at t = 0, 10, 20, 30 and 45 ms. Figure 3b shows the simulation results, which are similar to those from the deployment tests. (Fig. 3: (a) The airbag during the inflation test; (b) simulation of the deployment process of the NAB; frames at t = 0, 10, 20, 30 and 45 ms.)
Sled test - The tested NAB system is inflated from gas storage on the sled, and the airbag is mounted at an angle of 60 degrees with the horizontal plane. The acceleration signals of the dummy's head and of the test sled are measured and recorded. Figure 4a shows the 50th percentile HYBRID III dummy response and its contact with the NAB at 50 ms in the sled crash test; the measured acceleration peak value is 16 g at the dummy head's center of gravity. From the virtual testing using the FE model, the dummy response and contact with the airbag at 50 ms are shown in Figure 4b, which is comparable with the crash test result. (Fig. 4: (a) Sled crash test of the NAB with the HIII dummy, test at 50 ms; (b) simulation of the interaction of the NAB with the dummy at 50 ms.) The time history plots of the head accelerations measured from the test and the simulation are compared in Figure 5; the magnitude and duration of the head acceleration curves agree reasonably well. (Fig. 5: Time history plots of the dummy's head acceleration from the sled crash test and from the simulation of the sled test.) The validated NAB module model will be applied in the OOP occupant simulation discussed in the next section.

Comparison of the deployment properties of the NAB and DAB - To compare the deployment properties of the NAB and the normal DAB, and to assess the relative potential of different airbag designs to cause injury, two groups of static deployment simulations were conducted.

The first group of simulations measures the leading-edge speed of the normal DAB and the NAB; here the normal DAB volume is 60 L and the NAB volume is 40 L, with the same inflator parameters. Figure 6 shows the leading-edge velocity results from the static deployment simulation. From the time history plot of the leading-edge velocity, the peak value of the normal DAB is 53.9 m/s at time 0.0025 s, while that of the NAB is 27.6 m/s at time 0.0225 s. Reed et al. recorded ARS 3 (Abrasion Rating System) abrasions to human skin at a contact speed of 234 km/h (65 m/s); the possible abrasion injury caused by the NAB is therefore lower than ARS 3, and is also lower than the possible abrasion injury caused by the DAB. (Fig. 6: Leading-edge velocity comparison.)

In the second group of simulations, the normal DAB and the NAB have the same volume (40 L), use the same inflator parameters, and their leakage areas were set to the same value (1152 mm^2). Figure 7 shows the volume history plots of the two types of airbag: the normal DAB reached its maximum volume at time 0.036 s, while the NAB reached its maximum volume at time 0.039 s. (Fig. 7: Volume history plot.) Figure 8 shows the pressure history plots of the two types of airbag: the DAB reached its peak value (0.1445 MPa) at time 0.035 s, and the NAB reached its peak value at time 0.037 s. (Fig. 8: Pressure history plot.) So the NAB can use less gas to reach the same airbag inner pressure.
IV. AIRBAG-DESIGN PARAMETRIC STUDY

Design variables and control levels - Some of the design parameters that might affect the airbag module performance in driver OOP conditions are: the airbag structure; the collocation and length of the tethers; venting during inflation, and the size and location of the vent hole; the cover break-out force; the inflator characteristics, such as the mass flow rate and the inflator tank pressure characteristics (the slope and the peak); and the fabric material properties, e.g. porous property, density and modulus of elasticity [5].

The parametric study was carried out in two steps. In the first step, some initial design parameters of the NAB structure were studied, mainly to judge the influence of the tether and the vent hole on the NAB; the collocation of the tether, the length of the tether, the size and location of the vent hole, and the mass flow rate are selected as design variables [6]. The design variables and their control levels are shown in Table 1.

Table 1. Design variables and control levels (first step)

  Design variable                          Level 1    Level 2   Level 3   Level 4
  A = Tether collocation                   Col 1      Col 2     Col 3     Absence
  B = Tether length                        100%       90%       80%       70%
  C = Vent hole area                       1110 mm^2  80%       70%       110%
  D = Vent hole circumferential position   Pos 1      Pos 2     Pos 3     Pos 4
  E = Vent hole radial position            Pos 1      Pos 2
  F = Mass flow rate                       100%       70%

In the second step, based on the research results of the first step of the parametric study, the influence of the inflator characteristics and the fabric material properties is discussed; the design variables and their control levels are shown in Table 2.

Table 2. Design variables and control levels (second step)

  Design variable                          Level 1    Level 2   Level 3
  G = Inflator pressure peak (kPa)         120        160       200
  H = Inflator pressure slope (ms)         0          4.5       6.5
  I = Fabric material density (kg/m^3)     541        721       901
  J = Fabric material porous property      0          50        100

Object function definition - According to the findings of biomechanics research, field automotive accidents and laboratory tests, the head, chest and neck are the body regions most vulnerable to injury while an airbag interacts with an OOP occupant. In order to evaluate the NAB's protection performance synthetically, the US NCAP injury index P_COMB and the neck injury criterion N_ij are chosen as the object functions. The objective, evaluated for the in-position (IP), OOP1 and OOP2 postures, is formulated as follows [7]:

  Object Function = MIN(P_COMB)      (1)

  P_COMB = P_HEAD + P_CHEST - (P_HEAD x P_CHEST)      (2)

where P_HEAD and P_CHEST are defined as:

  P_HEAD = 1 / (1 + exp(5.02 - 0.00351 x HIC36))      (3)

  P_CHEST = 1 / (1 + exp(5.55 - 0.00693 x CHESTG))      (4)

P_COMB presents both the HIC and the chest acceleration as only one function. N_ij can present both the upper neck force and the upper neck moment; it is formulated as:

  N_ij = F_z / F_critical + M_y / M_critical      (5)

Simulation matrix - After the formulation of the design problem, a design of experiments (DOE) analysis was conducted to design the test matrix, in order to provide enough representative test data. Traditional test work generally changes one design variable's value at a time while the other variables are kept invariable; the number of experiments then becomes very large, and the mutually affecting relations of the variables cannot be obtained. The method of using an orthogonal array to arrange the experiments permits many variables to change simultaneously; it reduces the number of experiments greatly and makes it possible to estimate the effects of the variables more precisely. For the first step of the parametric study the orthogonal array is defined as an L16 matrix, and for the second step it is defined as an L9(3^4) matrix, with the 5th percentile female dummy seated in three postures: in-position, out-of-position 1 and out-of-position 2. In this way the LS-DYNA simulations are not required 3 x 3^4 = 243 times but only 27 times. The simulations are accomplished for each case of Table 3 and Table 4 using the validated LS-DYNA simulation models.

Table 3 lists the virtual test results for the 27 runs (run numbers 1-9 for each of the three dummy postures): the chest acceleration (Chest G), upper neck moment M_y (Nm), HIC36, upper neck force F_z (N), P_COMB and N_ij for each case.

Analysis of results - The Design Exploration Tools of iSIGHT are used to analyze the virtual test results mentioned previously. ANOVA is the statistical analysis of the contribution of each factor, or of their interactions, to the variance of a response [8]; R^2 in ANOVA is the coefficient of determination, in other words a measure of the accuracy of the model fit. The pareto plots indicate the relative effect of each design variable on occupant injury, and the main-effects graphs indicate the desirable direction for each design variable.
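The injury indices of Eqs. (1)-(5) are simple closed forms and can be computed directly, as in the following sketch. The example inputs, and the critical intercepts F_critical and M_critical, are illustrative assumptions; the paper does not list the critical values it used.

```python
import math

def p_head(hic36):
    """Probability of head injury, Eq. (3)."""
    return 1.0 / (1.0 + math.exp(5.02 - 0.00351 * hic36))

def p_chest(chest_g):
    """Probability of chest injury from the chest acceleration, Eq. (4)."""
    return 1.0 / (1.0 + math.exp(5.55 - 0.00693 * chest_g))

def p_comb(hic36, chest_g):
    """Combined US NCAP injury index, Eq. (2)."""
    ph, pc = p_head(hic36), p_chest(chest_g)
    return ph + pc - ph * pc

def nij(fz, my, f_critical, m_critical):
    """Neck injury criterion, Eq. (5): normalized axial force plus normalized moment."""
    return fz / f_critical + my / m_critical

# Assumed simulation outputs and assumed critical intercepts, for illustration only:
print(round(p_comb(hic36=650.0, chest_g=36.0), 4))
print(round(nij(fz=1250.0, my=25.0, f_critical=4287.0, m_critical=155.0), 4))
```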
V. RESULTS AND ANALYSIS

Based on the parametric analysis over the above three scenes, we can draw the conclusion that "the pressure slope of the inflator" and "the porous property of the fabric" are sensitive with respect to occupant neck injury, while "the pressure peak of the inflator" is sensitive with respect to occupant head and chest injuries. The predicted desirable directions for the design variables are as follows: the pressure peak of the inflator should be chosen at the lower level; the pressure slope of the inflator should be chosen at the lower or medium level; and the density and porous property of the fabric should be chosen at the medium or higher level. In order to make the NAB provide good protection for occupants under all conditions, the choice of design variables should be considered synthetically.
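Given the levels in Table 2, the second-step simulation matrix can be generated from the standard L9(3^4) orthogonal array, as sketched below. The array literal is the standard Taguchi L9; the mapping code is our illustration, not the authors' tooling.

```python
import numpy as np

# Standard Taguchi L9(3^4) array: 9 runs, level index 1..3 for factors G, H, I, J.
L9 = np.array([
    [1, 1, 1, 1], [1, 2, 2, 2], [1, 3, 3, 3],
    [2, 1, 2, 3], [2, 2, 3, 1], [2, 3, 1, 2],
    [3, 1, 3, 2], [3, 2, 1, 3], [3, 3, 2, 1],
])

levels = {
    "G_pressure_peak_kPa": [120, 160, 200],
    "H_pressure_slope_ms": [0.0, 4.5, 6.5],
    "I_fabric_density":    [541, 721, 901],   # kg/m^3
    "J_fabric_porosity":   [0, 50, 100],
}

for run, row in enumerate(L9, start=1):
    case = {name: vals[idx - 1] for (name, vals), idx in zip(levels.items(), row)}
    print(run, case)   # each printed case drives one LS-DYNA run, per dummy posture
```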

Effectiveness of NAB - The parameters of the NAB were adjusted in the directions of design improvement obtained from the above parametric analysis. To verify the NAB's protective efficiency for drivers of different statures, a series of virtual tests was then carried out with the 5th percentile female dummy and the 50th percentile male dummy sitting at OOP position 1 and position 2, using the 60 L normal DAB and the 40 L NAB respectively, for a total of 8 simulations. The recorded injury criteria include the 15 ms HIC, 3 ms clip, chest deflection, neck tension, neck compression and N_ij. The values normalized with respect to the corresponding injuries caused by the DAB are shown in Figure 9. (Fig. 9: NAB injury comparison with the DAB; NAB injury / DAB injury for HIC15, 3ms-Clip, Chest-D, Neck-T, Neck-C and Nij, for the 5th-OOP1, 5th-OOP2, 50th-OOP1 and 50th-OOP2 cases.) It is seen that all the injury ratios are less than 1, which indicates that the NAB could decrease the risk of injury for OOP occupants and provide better protection.

VI. CONCLUSION

Results from computer simulations and sled crash tests show that the NAB system has good potential to provide protection to occupants as effectively as the traditional airbag system would. At least two advantages may be obtained. Firstly, the NAB system requires less gas for the airbag to take the same amount of space, so the inflation system can be made smaller and less expensive. Secondly, the system can deploy more rapidly to take as much space as possible between the occupant and the automobile interior, while the upper layer, which impacts the occupant, becomes softer, which is desirable to avoid unexpected injuries to the occupants. Numerical results also indicate that the NAB system would give less harm to the occupant if the ignition takes place properly. Nevertheless, the actual performance of the NAB system in a commercial product has to be examined with an actual inflation system activated by a matched ignition system. Further investigation will be performed subsequently to develop a new airbag product. The authors believe that the concept and prototype of the NAB system are worthwhile to be exploited for the improvement of airbag-based occupant protection.
REFERENCES

[1] M. Khan, M. Moatamedi. A review of airbag test and analysis. International Journal of Crashworthiness, Volume 13, Issue 1, February 2008, pages 67-76.
[2] Jörg Hoffmann, Michael Freisinger, Mike Blundell, Manoj Mahangare, Peter Ritmeijer. Investigation Into the Effectiveness of Advanced Driver Airbag Modules Designed for OOP Injury Mitigation. In: Proceedings of the 20th International Technical Conference on the Enhanced Safety of Vehicles, paper No. 07-0319-O.
[3] Zhong, Z. H., et al. Sandwiched Tube-Type Airbag. Patent CN200410046609, 2005.
[4] Raj Roychoudhury, Mohamed Hamid, et al. 5th Percentile Driver Out of Position Computer Simulation. SAE paper No. 2000-01-1006.
[5] William Mu, Dana Sun, Craig Hanson, Soongu Hong. Driver out-of-position injuries mitigation and advanced restraint features development. ESV 17th Conference.
[6] J. Huang, Z. Zhong, L. Hu and D. Liu. Modeling and Simulation of Sandwiched Tube-Type Airbag and its Optimization using Design of Experiments. Proc. IMechE, Vol. 221, Part D: J. Automobile Engineering.
[7] Seybok Lee, et al. A Shape and Tether Study of Mid-Mounted Passenger Airbag Cushion Using Design of Experiments. 9th International MADYMO User's Meeting, Como, Italy, October 2002.
[8] Neter, J., Wasserman, W., Kutner, M. Applied Linear Statistical Models: Regression, Analysis of Variance, and Experimental Designs. Irwin Publishing, Boston, MA, 1990.

Hu Lin was born in Hunan province, P. R. China, on September 4, 1978. He received the doctor's degree in mechanical engineering from Hunan University in 2008, and focuses on research in automotive safety and electronics. He has published 25 articles. (Corresponding author; mobile phone: 08615821431148; e-mail: hulin888@sohu.com.)

Performance Evaluation of PNtMS: A Portable Network Traffic Monitoring System on Embedded Linux Platform

Mostafijur Rahman, Zahereel Ishwar Abdul Khalib, R. B. Ahmad
School of Computer and Communication Engineering, Universiti Malaysia Perlis, P.O. Box 77, d/a Pejabat Pos Besar, 01007 Kangar, Perlis, Malaysia. Email: mostafijur21@hotmail.com

Abstract - The principal role of embedded software is the transformation of data and the interaction with the physical world. Because of resource limitations in terms of processing power, memory and power consumption, the PNtMS was developed to make efficient use of limited resources. The system has been designed to capture network packet information from the network and to perform some statistical analysis. These data are then stored into log files, and the traffic information can be shown through a web browser or the onboard LCD. This paper presents the implementation, operation and performance evaluation of the portable network traffic monitoring system (PNtMS) on an embedded Linux platform. Results show that the PNtMS performs at par with an existing network protocol analyzer with minimal usage of RAM (578 KB), a low-end processor (133 MHz) and storage of less than 1 GB.

Keywords - embedded linux, network monitoring, single board computer

I. INTRODUCTION

The rapid growth of hardware technologies brings a large variety of smaller hardware architectures and platform orientations, which has been creating a large demand for embedded software. The principal role of embedded software is the transformation of data and the interaction with the physical world [1]. Embedded software is marked with the stamps of timeliness, concurrency, liveness, reactivity and heterogeneity [2]. Programmers are focusing more and more on developing software on embedded systems to make it portable and platform independent. The main problem in developing embedded software for network applications is the lack of published best-practice software architecture for optimizing performance by means of reducing the protocol processing overhead.

The Internet has been growing dramatically over the past several years. With this rapid growth, the Internet is used for increasingly diverse and demanding purposes, which leads to frequent changes in network status. Therefore, it is crucial to monitor the network in order to understand the network behavior and to react appropriately; this will also help us to design and provide a more efficient network for the future. The monitoring data of interest include the volume and types of traffic transferred within a LAN, the traffic generated per node, memory usage, the traffic going through or coming from a system or application which is causing a bottleneck, and the level of peak traffic [3].

According to a survey, commercial Embedded Linux owns approximately 50 percent more share of the new-project market than either Microsoft or Wind River Systems [4]. Linux is a multi-tasking, multi-user, multi-processing operating system that can be purpose-made for the required application and target hardware. Technologic Systems (TS) [5] provides Single Board Computers (SBC) with the TSLinux operating system (OS), which is built to develop applications for a very small target that does not require a keyboard, floppy disks, or hard drives. The primary goal of this work is to see how TSLinux copes with the limitations inherent in a low-end embedded platform in producing a reliable embedded traffic monitoring system.
The principal work of network monitoring software includes collecting data from the Internet or an intranet and the analysis of those data; its common features cover the traffic volumes, per-node statistics, bottleneck identification and peak levels listed above. In this paper, the design, implementation, operation and performance evaluation of the PNtMS on an embedded Linux system will be discussed.

II. RELATED WORK

Numerous works have been done in the embedded system area. Nowadays researchers are focusing their research on small devices; they pay close attention to the size, cost, weight, interchangeability and consistency of the hardware, and try to make systems portable and reliable with better performance.

Work by Li and Chiang [6] proposed the implementation of a TCP/IP stack as a self-contained component, which is independent of the operating system and hardware. To adapt the TCP/IP stack as a self-contained component for embedded systems, a zero-copy mechanism was incorporated for reducing the protocol-processing overhead. In this mechanism, data from a network card is received directly in the user buffer, and data from the user buffer is sent directly to the network card.

A navigation system for an autonomous underwater vehicle using the TS-7200 was developed by Sonia Thakur [7]. The objectives of the work were the implementation of a driver for the external ADC and the GPS receiver on a Linux SBC, and the demonstration of such a setup in an autonomous navigation system. One of the main requirements of that work was to write the code as generically as possible so that it could be ported to other Linux-based SBCs.

Ahmad Nasir Che Rosli [8] implemented a face reader for biometric identification using an SBC (TS-5500). In his research the system was able to capture a face image, execute image preprocessing (such as color space conversion, grayscale modification and image scaling), perform motion analysis, and perform biometric identification based on iris detection. It is used for security purposes.

R. B. Ahmad and W. Mamat [9] implemented a web-based data acquisition system using a 32-bit SBC (TS-5500). In their research the system was able to take analog readings through a sensor and convert them into digital data; the data transmission and the data access were done by TCP or UDP connection and web page respectively. Such a system can be used to read analog sensors, for example for a sewer or septic early-warning system, to monitor a river or beach, and in large-scale areas such as agricultural and environmental fields.

III. TS-5400 SBC AND TSLINUX

A. TS-5400 SBC

Technologic Systems (TS) provides different types of Single Board Computers, such as the TS-5000 and TS-7000 series. Among the models, the TS-5400 was chosen for this research because of its compatible architecture. The TS-5400 SBC runs on an AMD Elan520 processor at 133 MHz (approximately 10 times faster than 386EX-based products) with dimensions of 4.1" x 5.4". It has 16 MB of high-speed SDRAM and a 2 MB flash disk; a compact flash (CF) card in the socket appears as a hard drive to the operating system. As a general-purpose controller, it provides a standard set of peripherals [5]: a CF card interface as IDE0, dual 10/100 Mbps Ethernet interfaces with autosense, 3 PC/AT standard serial ports with 16-byte FIFOs, an alphanumeric LCD interface, and a matrix keypad interface on DIO2. The board is fanless, with a temperature range of -20 °C to +70 °C and a power requirement of 5 V DC @ 800 mA. The AMD Elan520 was designed with a memory management unit (MMU) that supports Linux and many other operating systems, and the core supports the 32-bit instruction set.
B. TSLinux

TSLinux is an open-source project and a compact distribution based on GPL and GPL-like licensed applications; it was developed from "Linux From Scratch" and Mandrake. Although TSLinux is fairly generic, it is custom made to be used on a Technologic Systems Single Board Computer and is unsupported in any other use [11]. The version of TSLinux used here is 3.07a. The features of TSLinux include the kernel (V2.4.18 and 2.4.23), Glibc (V2.2.5), BASH, Telnet server and client, FTP server and client, the Apache web server with PHP, and other basic utilities. The total footprint is less than 18 MB (requiring a 32 MB or larger CF card). TSLinux shows worst-case interrupt latencies of under 7 microseconds [10]. TSLinux and DOS, being embedded distribution OSs, are installed into the CF card and the onboard chip respectively, and the TSLinux providers offer development tools on their web site. The PNtMS was developed on TSLinux; the hardware platform setup and the device driver are expected to serve as a development platform for implementing a well-designed PNtMS.

IV. SOFTWARE DEVELOPMENT

A. Preliminary setup

On the TS-5400, to start the program automatically, the network file needs to be configured as DHCP for the eth0 NIC. To activate FTP and secure copy, the dropbear script is run as a daemon. A 4x4 keypad and a 24x2 LCD panel are used as I/O devices. The keypad driver module needs to be mounted into the kernel, and a device driver file, TSKeypad, is created in the /dev/SBC/ location; the TSKeypad file is used for the input function.

B. Development environment

It is usual practice for most embedded system developments not to support onboard compilation, and the same applies to the target platform of this research: the TSLinux 3.07a operating system running onboard does not integrate a C compiler in its set of supported utilities, so the board requires cross-compilation for its application development. Cross-platform compilation is the mechanism used to develop the firmware, since the kernel version of the target OS (kernel 2.4.23) does not match the kernel version of the development platform (kernel 2.6.20-16). The developed application has to resort to chroot during the compilation to ensure compatibility with the C libraries. On the host machine, minicom (a text-based modem control and terminal emulation program for Unix-like operating systems) is used to communicate with the target embedded system through the serial port. There are three ways to transfer files from the desktop PC to the TS-5400: FTP, ZMODEM and secure copy. The communication between the development host and the target hardware is shown in Figure 1; this particular setup is common in embedded Linux development, but it is not the only possible setup that should work. (Figure 1: Embedded Linux development host and target system interconnection setup [12].)

To adapt the PNtMS components to the resource-limited embedded system, pre-allocated memory is used in memory management rather than allocating buffers at run time. Our focus is to reduce memory usage and CPU processing, as well as power consumption.
C. Components involved in the PNtMS

The structural breakdown of the PNtMS can be generically segregated into three parts, which can be mapped to different layers of the OS; Figure 2 shows the component elements of the PNtMS. The first part is the probe, which captures all incoming packets from the network. This part operates at the network layer and captures data physically through the Network Interface Card (NIC); to this end, it forces the NIC to run in promiscuous mode. This functionality is realized through the libpcap library package [14]. The probe itself does not process packet headers at all, but transfers packets to storage without any packet loss. The next layer (the kernel layer) contains the functionality for data acquisition and places the data into a special memory region (called the kernel buffer) to be read and used by a separate user application program, the third layer [15]. The packet filter extracts each packet's header information and stores it into a data buffer for further analysis; the gathered packet headers are analysed based on the header information of the TCP/IP protocol suite [13]. The analyzer then processes all captured data: the available hosts are selected, their information is updated, and the host records are sorted according to the total captured bytes.
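A minimal sketch of this probe/analyzer split is given below, assuming a Linux host and root privileges. It reads raw Ethernet frames through an AF_PACKET socket instead of the libpcap binding that the real probe uses, and it omits the promiscuous-mode setup, so it only illustrates the idea of storing fixed-size header slices and per-host byte counts.

```python
import socket

ETH_P_ALL = 0x0003          # receive frames of every protocol
HEADER_SLICE = 94           # PNtMS stores at most 94 bytes per packet record

def capture(max_packets=100):
    # AF_PACKET raw socket: Linux-only and requires root privileges.
    sock = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.htons(ETH_P_ALL))
    headers, hosts = [], {}
    for _ in range(max_packets):
        frame, _ = sock.recvfrom(65535)
        headers.append(frame[:HEADER_SLICE])                   # probe: header slice only
        if len(frame) >= 34 and frame[12:14] == b"\x08\x00":   # IPv4 EtherType
            src = socket.inet_ntoa(frame[26:30])               # analyzer: per-host stats
            hosts[src] = hosts.get(src, 0) + len(frame)
    sock.close()
    # hosts sorted by total captured bytes, as the PNtMS analyzer does
    return headers, sorted(hosts.items(), key=lambda kv: kv[1], reverse=True)

if __name__ == "__main__":
    hdrs, top_hosts = capture()
    print(len(hdrs), "headers captured;", top_hosts[:5])
```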

A. Operation of the system

The functionality of the PNtMS constitutes the processing of TCP/IP network traffic with respect to network protocol analysis and traffic monitoring: packets are captured, stored into a buffer, analyzed, and then displayed. The keypad is used to set input data such as the bandwidth (Kbps) and the time interval. If the observed bandwidth is greater than the inputted one, that is, when the traffic is at a peak, an alarm is displayed on the LCD. After analysis, all host information is saved into a file for monitoring through a web browser; the browser pages were developed using HTML and PHP code. Figure 3 shows the inputted time interval and the complete system setup of the PNtMS with the TS-5400, while Figures 4 and 5 show some of the statistical information of the available traffic and hosts (web-based traffic statistics and web-based individual host statistics on the TS-5400, including a browsable web address that shows the total traffic information).

A maximum of 31000 packets can be captured and more than 200 hosts' information can be stored in every interval; the per-packet and per-host record sizes are limited to 94 and 66 bytes respectively. All of these limits are imposed because of the memory limitation: we tried to use limited memory while displaying as much traffic information as we can. Some statistical data are shown through the 24x2 LCD panel, and some system information can also be shown through it, such as the system IP, Ethernet status, process status, disk usage, and memory and CPU usage. The system is controlled by the 4x4 matrix keypad, which can start and stop the program; some special options, such as shutting down and restarting the system, are set through the web browser.

V. RESULTS AND PERFORMANCE EVALUATION

This section presents the results and performance evaluation of the PNtMS after being integrated into the TSLinux OS, compared against Wireshark (V0.99.4). Even though Wireshark is a high-performance protocol analyzer with more features, we want to evaluate the performance of the PNtMS against it because it is renowned. The traffic measurement and the analysis were carried out at the Embedded Kluster Lab, Universiti Malaysia Perlis. All packet headers of the packets transferred on the network during a 1 h period on September 15, 2008 were captured from the same network. The experiment assumes that the traffic is typical Internet traffic, so that the characteristics of the analysis results can be applied to the whole Internet. The duration was rather short, but we performed several experiments at different times and the results showed very similar traffic characteristics. Table 1 shows the experimental platforms that we used to evaluate the software.

so it can store more information about a packet. 16. gcc 4. ADNS. Architechture Type Dimention Weight and portability OS Linux Kernel NIC I/O interface Table 1. The duration was rather short. On the other hand.5 in development 0. RAM Usage (%) Avg.information as possible as we can. Different layers protocols statistics. All packet headers of the packets transferred on the network during a 1 h period on September 15. 256MB RAM.23 2. 80GB secondary 14. In the Figure 3b percentage of TCP(65%) traffic dominates as majority of communication takes place through TCP protocol. Avg Packet size Avg. Port Audio <=V18.07a UBUNTU 7.1. in Wireshark. libz 1.175 14.11.95) .1" x 5. Glib 2.4. each packet size limit is 65535 bytes.347 6. c) Application Layer Protocol Figure 6 clearly demonstrate that the PNtMS can capture three types of network traffics and protocols capture state using the percentage. IP(76. Data Rate a) Network Layer Protocol b) Transport Layer Protocol Figure 6.04 2. 112 .23MB already used).661 470.12.378 KBps 2.883 439. Packets/Sec. 16MB RAM (Avg.3. Performance Comparison TS-5400 PC PNtMS Wireshark (0.88%).4. but we did several experiments at different times and the results showed very similar traffic characteristics. Exprimental Platform TS-5400 PC AMD Elan520 at 133 MHz Intel(R) Pentium(R)4 CPU 2.7.2 Sequential Sequential 0. 17 and 23. environment GnuTLS 1. and not portable TS-Linux 3.6. and portable More than 1 Kg.9. libpcap GTK+ 2.11. A small number of ICMP(62 packets) packets are also captured and put into the overall statistics.99. Processor. libpcre 6. 1GB storage secondary storage Embedded SBC Desktop PC 4.66 GHz processor.2.5.958 3.20-16-generic 10/100 Ethernet Realtek RTL8139 Family PCI Fast Ethernet Keypad (4X4) and LCD(24X2) Keyboard and Monitor Table2.150 4.862 KBps Software Name Compiled With Execution Type Avg.4) gcc (2. CPU Usage (%) Avg. Eventhough wireshark is high performance protocol analyzer with more features we want to evaluate the performance of PNtMS with it.4. libpcap 0.10. 2008 were captured from the same network.725 3.675 7.4" Larger than TS-5400 Less than 1 Kg.2.9. Gcrypt 1. At the same time Wireshark captured network and transport layer protocol such as.3.

VI. CONCLUSION

Network performance measurement is an important aspect of network management. In this article we compared and analyzed the PNtMS against a desktop-based, high-performance network analyzer (Wireshark). From the results we can conclude, through the implementation and the protocol analysis, that the PNtMS has been evaluated and validated: it was successfully developed within limited usable memory (578 KB), processor and storage. This is only a glimpse of what the PNtMS presents while performing protocol analysis. Our work focused on obtaining better performance from the limited resources of the TS-5400, showing more statistical analysis results, and making the PNtMS user friendly. A challenging future work is to use a database on the CF card (1 GB) for monitoring long-term traffic history. The most challenging future work is to extend the PNtMS so that it can be used to monitor and analyze switched networks such as Fast Ethernet and Gigabit Ethernet, and to make it perform better.

REFERENCES

[1] E. A. Lee, "Embedded Software," in Advances in Computers, M. Zelkowitz, Ed., vol. 56, London: Academic Press, 2002.
[2] Jing, Xuejian and Minghui, "A heterogeneous evolutional architecture for embedded software," in Computer and Information Technology, 2005 (CIT 2005), The Fifth International Conference on, 2005.
[3] J. W. Hong, et al., "WebTrafMon: Web-based Internet/Intranet network traffic monitoring and analysis system," Computer Communications, vol. 22, pp. 1333-1342, 1999.
[4] D. Geer, "Survey: Embedded Linux Ahead of the Pack," IEEE Distributed Systems Online, vol. 5, October 2004.
[5] Technologic Systems Inc., "PC/104 Single Board Computers and Peripherals for Embedded Systems," Technologic Systems web page, TS Product, 2008.
[6] L. Yun-Chen and C. Mei-Ling, "LyraNET: a zero-copy TCP/IP protocol stack for embedded operating systems," in Embedded and Real-Time Computing Systems and Applications, 2005. Proceedings. 11th IEEE International Conference on, 2005, pp. 123-128.
[7] S. Thakur and J. Conrad, "An embedded Linux based navigation system for an autonomous underwater vehicle," in SoutheastCon, 2007. Proceedings. IEEE, 2007, pp. 237-242.
[8] A. N. C. Rosli, M. Rizon, A. Shakaff, M. Juhari, et al., "Face Reader for Biometric Identification using Single Board Computer and GNU/Linux," in Proceedings of the 2nd International Conference on Informatics, Petaling Jaya, Selangor, Malaysia, 2007.
[9] R. B. Ahmad and W. Mamat, "Data Acquisition System Using 32 Bit Single Board Computer: Hardware Architecture and Software Development," in International Conference on Robotics, Vision, Information, and Signal Processing, Penang, Malaysia, 2005, pp. 901-905.
[10] M. Breinich, "Online book on real-time simulation of a satellite link using a standard PC running Linux as operating system," Salzburg, Austria.
[11] Technologic Systems Inc., "TS-Linux," TS Product documentation, 2008.
[12] C. Hallinan, Embedded Linux Primer: A Practical Real-World Approach. New York: Prentice Hall PTR.
[13] D. C. Lynch and M. T. Rose, Internet System Handbook. Addison-Wesley, 1993.
[14] S. McCanne, C. Leres, and V. Jacobson, "Tcpdump," in Manual Page, Current ed.
[15] Y. Kim, et al., "An empirical study of the characteristics of Internet traffic," Computer Communications, pp. 1607-1618.

PB-GPCT: A Platform-Based Configuration Tool

Yan Huiqiang, Shi Kangyun, Tan Runhua, Lu Fei
Institute of Design for Innovation, Hebei University of Technology, Tianjin, China
E-mail: yanhuiqiang@scse.hebut.edu.cn

Abstract - In this paper the Platform-based Generic Product Configuration Tool (PB-GPCT) is presented, which was developed by the Institute of Design for Innovation, Hebei University of Technology. The PB-GPCT is designed to be used by sales engineers for the configuration of complex products. Compared with other configurators, the platform concept is introduced into the PB-GPCT. Being a structure-based and domain-independent system, the PB-GPCT can be used in different companies without any modification. Configuration knowledge and configuration constraints are discussed in this paper.

1. Introduction

With competition becoming more and more fierce in industry, companies have to support Mass Customization to keep costs low and meet individual requirements at the same time. Product family design is the most important method for implementing Mass Customization, so that customers can get customized products in a tolerable time. In order to achieve this goal, knowledge-based configuration methods are employed.

Product family design is usually based on a platform. Meyer and Lehnerd define a platform as a "set of common components, modules, or parts from which a stream of derivative products can be efficiently created and launched" [1]. Muffato defines a platform similarly as "a relatively large set of product components that are physically connected as a stable sub-assembly and are common to different final models" [2]. There are several methods for designing a platform: Simpson et al. give two types of platform design methods, (1) top-down and (2) bottom-up [3], and another way of categorizing platform design methods is to distinguish between module-based and scale-based platforms [4]. For brevity, the design approach of the platform is not discussed in this paper. The relationship between product families and the platform is shown in Figure 1. (Fig. 1: Relationship between platform and product family; platforms 1..m and components 1..n, under constraints and user requirements, yield products 1..n of a product family.)

Mittal and Frayman define configuration as a form of design which selects an assembly of components from a set of pre-defined components to meet the customer's requirements [5]. Configuration is also described as selecting objects and their relations from a set of objects and a set of relations according to the customer's requirements [6]. Both definitions emphasize that a product is composed of components and relationships among components; that is, configuration is choosing components from a set of pre-defined components to construct the products. Correspondingly, a product configurator is defined as a tool which supports the product configuration process so that all the design and configuration rules expressed in a product configuration model are guaranteed to be satisfied [7]. The configuration model is the product model scoped within the conceptualization of the configuration domain; it is defined as a set of pre-designed components, rules on how these can be combined into valid product variants, and rules on how to achieve the desired functions for a customer [8]. In the PB-GPCT, the platform is first determined in terms of the customer's requirements, and then different products are derived by selecting different components which obey the constraints among the platforms and components. The platform is thus the components and the constraints among them from which different products can be derived.
Muffato defines platform similarly as: “a relatively large set of product components that are physically connected as a stable sub-assembly and are common to different final models” [2]. Being a structure-based and domain independent system. the design approach of platform is not discussed in this paper.1109/ICCET. Introduction With the competitiveness become more and more violent in industries.

This paper is organized into four sections. In Section 1, the concept of platform-based configuration is introduced. In Section 2, the technique relevant to configuration is discussed. In Section 3, the maintenance and analysis of constraints is presented. Lastly, the implementation of the PB-GPCT and future works are discussed.

2. Configuration model

2.1 Configuration model definition

The PB-GPCT defines the configuration model (CM) as:

  CM = ({P_1, ..., P_m}, {C_1, ..., C_n}, {R_1, ..., R_s})

where P_i is a platform, which is composed of component instances and component types; C_i is an element of the product model, called a component type; and R_i is a constraint among platforms and component types. Corresponding to this definition, configuration is defined as:

  Solution space = f(CM_i, {R_1, ..., R_n})

where CM_i is the configuration model whose platform has been designated in terms of the customer's requirements. Namely, configuration is the process of instantiating the configuration model. Configuration tasks are summarized as having the following characteristics [10]: (1) a set of components in the application domain; (2) a set of relations between the domain components; (3) control knowledge about the configuration process.

2.2 The knowledge taxonomies of configuration

Being a knowledge-intensive system, knowledge representation is critical to a configuration tool.

Component type. A component type is an element of the product. There are two kinds of component types. One is the virtual component, which does not have a physical correspondence; its properties are assigned by sales engineers at the configuration phase. The other is the physical component, which typically has a bill of materials associated with it; its properties are given by product experts before configuration. Virtual components and physical components both have component instances associated with them.

Component instance. A component instance is a component type which has been instantiated by product experts: when all the properties of a component are given values, we call it a component instance. In general, a component instance is a mass-produced component.

Relations. The taxonomies of relations are part-of and is-a. "Component B is-a component A" means that component B is a kind of component A, e.g. IDE-Unit and SCSI-Unit are two kinds of HD-Unit. The part-of relation means that one component is a part of another, e.g. VideoCard and HD-Unit are parts of MotherBoard.

Rules. Rules include not only the constraints among component types but also the constraints among component instances.

Platform. A platform consists of the common elements from which a range of products can be derived; it is the sub-collection of component instances and component types from which individual products are derived. For a configurable PC, for example, the platform elements are the MotherBoard and the CPU.

Control knowledge. Control knowledge is used to control the process of configuration; depending on the configuration policy, different control knowledge will be required. All of the knowledge is managed by product experts, except for the control knowledge.

2.3 Configuration model representation

UML (Unified Modeling Language) is the leading industrial object-oriented modeling language for software engineering, and it is also widely accepted for describing product models, so the PB-GPCT selects UML to describe the configuration model. OCL (Object Constraint Language) is a formal language used to describe expressions on UML models; CCL (Configuration Constraint Language), which is based on OCL, is used to describe the constraints of the configuration model. Part of the configuration model of a configurable PC, expressed in UML and CCL, is shown in Figure 2. (Fig. 2: Part of the configuration model of a configurable PC.) During configuration, the sales engineer first selects the platform (determining the MotherBoard and CPU) according to the customer's requirements, and then, based on the platform, the other components are chosen.
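The definitions above can be made concrete with a small sketch of the data model. The class names and the example property value ("S478") are our illustrations, not the PB-GPCT API; the PC example follows Figure 2.

```python
from dataclasses import dataclass, field

@dataclass
class ComponentType:
    name: str
    virtual: bool = False                        # virtual types have no BOM
    isa: "ComponentType | None" = None           # is-a relation
    parts: list = field(default_factory=list)    # part-of children

@dataclass
class ComponentInstance:
    ctype: ComponentType
    props: dict                                  # property values set by product experts

motherboard = ComponentType("MotherBoard")
cpu = ComponentType("CPU")
hd_unit = ComponentType("HD-Unit", virtual=True)
ide_unit = ComponentType("IDE-Unit", isa=hd_unit)    # IDE-Unit is-a HD-Unit
scsi_unit = ComponentType("SCSI-Unit", isa=hd_unit)  # SCSI-Unit is-a HD-Unit
video = ComponentType("VideoCard")
motherboard.parts += [cpu, hd_unit, video]           # part-of relations

mb1 = ComponentInstance(motherboard, {"port": "S478"})   # "S478" is an assumed value
cpu1 = ComponentInstance(cpu, {"port": "S478"})
platform = [mb1, cpu1]     # the platform is fixed first, from customer requirements
```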
Ci is the element of the product model which is called component type.3.combined into valid product variants and rules on how to achieve the desired functions for a customer [8]. Configuration model 2. 115 . knowledge representation is critical to configuration tool. In section 3. then based on the platform other components are chosen. Rules. Ri is constraints among platforms and component types.{R1. HD-Unit and CPU. But from viewpoint of product family design.
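As a minimal sketch of how the CM = ({P1, …, Pm}, {C1, …, Cn}, {R1, …, Rs}) definition could be represented in code — the class and field names are hypothetical, not PB-GPCT's actual implementation:

    # Illustrative data model for CM = (platforms, component types, rules).
    from dataclasses import dataclass, field

    @dataclass
    class ComponentType:
        name: str
        virtual: bool                              # True: properties set by sales engineers at configuration time
        properties: dict = field(default_factory=dict)
        parts: list = field(default_factory=list)  # part-of children
        kinds: list = field(default_factory=list)  # is-a specialisations

    @dataclass
    class Platform:
        name: str
        elements: list                             # shared elements, e.g. MotherBoard and CPU

    @dataclass
    class ConfigurationModel:
        platforms: list                            # {P1..Pm}
        components: list                           # {C1..Cn}
        rules: list                                # {R1..Rs}, CCL constraint strings

    # Platform-based configuration fixes the platform first, then binds the rest.
    mb = ComponentType("MotherBoard", virtual=False)
    cpu = ComponentType("CPU", virtual=False)
    pc = ConfigurationModel(platforms=[Platform("BasicPC", [mb, cpu])],
                            components=[mb, cpu],
                            rules=["MotherBoard.port = CPU.port"])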

3. Maintenance and evaluation of constraints

Corresponding to the viewpoint above, the configuration knowledge of PB-GPCT is categorized into component knowledge, constraint knowledge and control knowledge, as shown in Figure 3. Component knowledge describes the product architecture using UML. Component instance knowledge and platform knowledge describe the concrete elements: a platform is the sub-collection of component instances from which individual products can be derived. Constraint knowledge describes the rules among component types using CCL. Control knowledge is used to control the process of configuration; different configuration policies require different control knowledge. All of the knowledge is managed by product experts except for the control knowledge. The main tables that store the configuration knowledge are shown in Figure 4, and the maintenance and evaluation of constraints is shown in Figure 5.

Figure 3. Configuration knowledge
Figure 4. Main tables storing the configuration knowledge base
Figure 5. Maintenance and evaluation of constraints

3.1. Editing constraints

How constraints are written and evaluated is the core of PB-GPCT. OCL is an expression language based on first-order logic which enables the definition of constraints on object-oriented models; in general, however, writing constraint expressions in OCL is an error-prone task. CCL is therefore a sub-collection of OCL whose expressions can be written easily using the edit window shown in Figure 6. The following are two constraint examples for the model of Figure 2:

Constraint1: MotherBoard.port = CPU.port
Constraint2: MotherBoard.port = HD-Unit.port

Compared with OCL, the following OCL constructs [11] are not supported by CCL:

Let expressions. A let expression allows one to define a variable that can be used in the constraint.
Tuples. A tuple consists of named parts, each of which can have a distinct type.
Messages. Messages are used to specify the communication between classes.

Figure 6. CCL constraint edit window
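A hedged sketch of how an equality constraint such as "MotherBoard.port = CPU.port" could be evaluated over the instances a user has selected; the parsing is deliberately simplistic and the property values are illustrative only:

    def eval_equality_constraint(constraint: str, selection: dict) -> bool:
        """selection maps a component-type name to a dict of instance properties."""
        lhs, rhs = (side.strip() for side in constraint.split("="))

        def value(ref):
            comp, prop = ref.split(".")
            return selection[comp][prop]

        return value(lhs) == value(rhs)

    selection = {
        "MotherBoard": {"port": "LGA775"},   # hypothetical property values
        "CPU":         {"port": "LGA775"},
    }
    assert eval_equality_constraint("MotherBoard.port = CPU.port", selection)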

3.2. Analysis of constraints

After the constraints have been entered, PB-GPCT has to ensure that their form is correct through lexical analysis and syntactic analysis.

(1) Lexical analysis. This stage distinguishes between the identifiers and the keywords of CCL; the identifiers are component type names, component instance names and the properties of the component types. The result of this process is a sequence of tokens which becomes the input of the syntactic analysis.

(2) Syntactic analysis. The task of syntactic analysis is to determine whether the constraint expressions input by the user are correct. PB-GPCT employs the LL(1) approach [13] for this analysis, because neither left recursion nor backtracking exists in the grammar of CCL. For brevity, the details of CCL are not discussed in this paper.

3.3. Constraint patterns

Writing constraint expressions is a time-consuming task; for complicated products there will be numerous constraints in the configuration model, and contradictory constraints may be defined inadvertently. To address these issues, PB-GPCT provides a repository of constraint patterns that encapsulate the expertise needed to add constraints to a model easily and quickly, though users can still edit constraints flexibly using the edit window shown in Figure 7. Constraint patterns are predefined constraint expressions that can be instantiated and subsequently configured for developing constraints in a concise way. The taxonomies of constraint patterns are atomic constraint patterns and composite constraint patterns [12]. The following patterns are supported at present: (1) the AttributeValueRestriction pattern; (2) the UniqueAttributeValue pattern; (3) the Exists pattern; (4) the ForAll pattern; (5) the IfThenElse pattern. A future extension might support a constraint consistency check.

3.4. Evaluation of constraints

At the configuration phase, PB-GPCT has to evaluate the configuration items selected by the user to ensure that they are consistent with the constraints. To realize this function, the constraints are first transformed into first-order logic sentences [14]; these sentences are then transformed into a specific executable language and evaluated.

4. Implementation

The configurator PB-GPCT employs the popular language C#, which is more efficient than Java, and uses SQL Server 2000 as its database. To support extensibility and upgradeability, a plug-in architecture is employed, since plug-in architectures can quickly customize applications by mixing and matching plug-ins or writing new plug-ins for missing functions. Constraint analysis and constraint evaluation are the main plug-in components of the system. The main GUI is shown in Figure 7.

Figure 7. The system of PB-GPCT

The system has been applied at TIANJIN NO. 2 MACHINE TOOL CO., LTD. Using this software, the average design time has declined from more than one month to about one week.

5. Conclusions and future work

In this paper, the theory of PB-GPCT is presented. Future work includes the constraint consistency check mentioned above, as well as support for 3-D configuration, which would make WYSIWYG (What You See Is What You Get) interaction feasible in the system.

Acknowledgment

This project is supported by the Natural Science Foundation of Tianjin (07JCZDJC08900) and the National Natural Science Foundation of China (50675059).

References

[1] M. H. Meyer and A. P. Lehnerd, The Power of Product Platforms, The Free Press, New York, NY, 2000. ISBN 0-684-82580-5.
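Since the paper does not give the CCL grammar, the following sketch assumes a toy grammar (expr := ident '.' ident '=' ident '.' ident) chosen purely to illustrate the LL(1) style of the two checking stages: one token of lookahead, no left recursion, no backtracking.

    import re

    TOKEN = re.compile(r"\s*(?:(?P<ident>[A-Za-z_][\w-]*)|(?P<dot>\.)|(?P<eq>=))")

    def lex(text):                       # lexical analysis: text -> token stream
        pos, tokens = 0, []
        while pos < len(text):
            m = TOKEN.match(text, pos)
            if not m:
                raise SyntaxError(f"bad character at {pos}")
            tokens.append((m.lastgroup, m.group(m.lastgroup)))
            pos = m.end()
        return tokens

    def parse(tokens):                   # syntactic analysis: LL(1), no backtracking
        def expect(kind):
            nonlocal i
            if i >= len(tokens) or tokens[i][0] != kind:
                raise SyntaxError(f"expected {kind} at token {i}")
            i += 1
        i = 0
        for kind in ("ident", "dot", "ident", "eq", "ident", "dot", "ident"):
            expect(kind)
        if i != len(tokens):
            raise SyntaxError("trailing tokens")

    parse(lex("MotherBoard.port = CPU.port"))   # accepted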

[2] M. Muffato, "Introducing a platform strategy in product development," International Journal of Production Economics, vol. 60-61, 1999, pp. 145-153.
[3] T. W. Simpson, "Product platform design and customization: Status and promise," AIEDAM, special issue on platform product development for mass customization, Jan. 2004.
[4] T. W. Simpson, J. R. Maier, and F. Mistree, "Product platform design: method and application," Research in Engineering Design, vol. 13, no. 1, 2001, pp. 2-22.
[5] S. Mittal and F. Frayman, "Towards a generic model of configuration tasks," in Proceedings of the 11th IJCAI, Morgan Kaufmann, San Mateo, CA, 1989, pp. 1395-1401.
[6] A. Felfernig, G. Friedrich, and D. Jannach, "Generating product configuration knowledge bases from precise domain extended UML models," in Proceedings of the Conference on Software Engineering and Knowledge Engineering (SEKE'2000), Chicago, July 2000, pp. 284-293.
[7] G. Hedin, L. Ohlsson, and J. McKenna, "Product configuration using object oriented grammars," in Proceedings of the 8th International Symposium on System Configuration Management (SCM-8), Brussels, LNCS 1439, Springer-Verlag, 1998.
[8] K. Oldham et al., "Modelling knowledge used in the design of hosiery machines," in D. Hayhurst (ed.), Proceedings of the 33rd International MATADOR Conference, Springer-Verlag, 2000.
[9] A. Altuna, A. Cabrerizo, et al., "Co-operative and distributed configuration," NODe 2004, Erfurt, Germany, September 27-30, 2004.
[10] A. Guenter and C. Kuehn, "Knowledge-based configuration — survey and future directions," in XPS-99 Proceedings, Wuerzburg, Lecture Notes in Artificial Intelligence No. 1570, Springer-Verlag, 1999.
[11] OMG, Object Constraint Language Specification, 2006.
[12] M. Wahler, J. Koehler, and A. Brucker, "Model-driven constraint engineering," Electronic Communications of the EASST, vol. 5, 2006.
[13] Chen Ying, Principle of Compilation, Beijing.
[14] A. …, pp. 107-126, 2007.

A Feasibility Study on a Hyperblock-Based Aggressive Speculative Execution Model

Ming Cong, Hong An, Canming Zhao, Yongqing Ren, Jun Zhang
Department of Computer Science and Technology, University of Science and Technology of China, Hefei, 230027, China
Key Laboratory of Computer System and Architecture, Chinese Academy of Sciences, Beijing, 100080, China
mcong@mail.ustc.edu.cn, han@ustc.edu.cn, {renyq, zcm, junzh}@mail.ustc.edu.cn

Abstract—A speculative execution model, which executes sequential programs in parallel through speculation, is an effective technique for making better use of growing on-chip resources and exploiting more instruction-level parallelism in applications. However, as speculation in high-ILP processors becomes more aggressive, the number of mis-speculations increases with the growing number of in-flight instructions and blocks, and the accompanying communication overheads and roll-back penalties cannot be neglected. This paper focuses on the analysis of a hyperblock-based aggressive execution model in terms of both control dependences and data dependences. We evaluate the performance of three branch predictors, analyze the factors that affect the expected prediction depth, and find that the depth depends more on the application than on the predictor. We also propose a quantitative method for detecting data dependences under a hyperblock-based execution model, analyze the distribution of data dependences between hyperblocks, and estimate their impact on the depth of prediction. We then evaluate the feasibility of the aggressive speculative execution model on 8 applications from SPEC2K. Our experiments show that most applications, especially the SPECFP applications, can attain high control-flow prediction accuracy with hyperblock-based prediction mechanisms, and that the expected prediction depth differs depending on the characteristics of each application.

Keywords—hyperblock, speculative execution, prediction, control dependence, data dependence

I. INTRODUCTION

Modern CMOS technology brings an increasing number of transistors onto one chip, so how to utilize the growing resources effectively and exploit more parallelism to accelerate applications is an urgent problem for computer architects. Speculative execution [1], which executes programs aggressively, has become a mainstream technique for reducing the impact of dependences in high-performance microprocessors. To exploit ILP in programs effectively, the control-flow and data-flow constraints inherent in a program must be overcome, among which control dependence and data dependence are the most important.

The block-based execution model [4] has been proposed to enlarge the instruction window. A hyperblock [2][3] is a set of predicated basic blocks combined by the compiler, in which control flow enters only from the top but may exit from one or more locations. With hyperblocks, a larger instruction window and more ILP can be achieved than with basic blocks. Recent work on computer architectures, such as TRIPS and Multiscalar, uses block-atomic execution (tasks in Multiscalar [4]), in which each block is fetched, executed, and committed atomically; such processors behave like a conventional processor with sequential semantics at the block level. Similar to multi-issue in superscalar processors, the block-based execution model utilizes additional resources to execute more blocks simultaneously on the processor substrate, with all but one executing speculatively, which may achieve high ILP and high resource utilization. It is obvious, however, that the feasibility of the aggressive execution model depends largely on the effectiveness and prediction accuracy of speculation: although many mechanisms have been proposed for speculative execution, efficiency is still limited by mis-speculation penalties, high communication overheads, and so on. We concentrate on finding a trade-off and the maximum potential of aggressive execution that can be exploited.
This paper focuses on analyzing the feasibility of an aggressive speculative execution model and on finding an appropriate degree of "aggressiveness" under a hyperblock-based execution model. We analyze the characteristics of control dependences and data dependences between adjacent hyperblocks. Our experiments show that most applications, especially the SPECFP applications, can attain high control-flow prediction accuracy with hyperblock-based prediction mechanisms. Furthermore, we analyze the factors that affect the expected prediction depth and find that the depth depends more on the applications themselves than on the predictors. In the view of data-flow, we propose a quantitative analysis of data dependence under the hyperblock-based execution model and analyze the dependent behaviors of applications under this model.

The rest of this paper is organized as follows. Section II describes the aggressive execution model in detail and analyzes it in terms of both control dependences and data dependences. Section III experimentally evaluates the feasibility of aggressive execution, corresponding to the analysis in Section II. Section IV introduces related work on aggressive execution models. Finally, in Section V we draw our conclusions.

II. HYPERBLOCK-BASED AGGRESSIVE EXECUTION MODEL

Current research on ILP processors focuses on exposing more of the inherent parallelism in an application to obtain higher performance. On the control-flow side, the processor must predict across blocks aggressively; on the data-flow side, dependences between instructions need to be detected and avoided to prevent pipeline stalls.

A. Speculative execution on the control flow

1) Differences between hyperblock branch predictors and conventional predictors

Branch predictions are made from branch history information, and predictions in the hyperblock-based model are highly parallel, but their mechanisms differ from those in conventional superscalar processors in the following respects.

First, in the superscalar model each branch has only one exit, so one bit of exit information is enough to represent taken or not-taken (T/NT). In the block-based model each block has several exit points, so predicting which exit is taken out of multiple possible exits is a multi-way branching problem, and some bits of the exit ID or of the branch target address must be kept as exit information; this inevitably degrades prediction accuracy and increases predictor complexity.

Secondly, the variable block sizes and the different target addresses to which each exit may correspond force us to predict the target addresses of all exit points. Conventionally, a dedicated adder in the fetch mechanism and the branch target buffer (BTB) of RISC architectures can compute PC-relative target addresses before the ALU(s) compute them in the execute stage, and the type of a branch instruction can easily be obtained from a pre-decoder. Conventional methods therefore use a BTB together with a return address stack (RAS), which predicts the addresses of return exits when the type of each exit is known and so makes prediction more accurate. In the hyperblock-based model, however, the types of the exit points in a block are hard to obtain before the block commits, so a further mechanism would be needed to predict the exit type; we therefore use only the BTB for target prediction.

Thirdly, we use two history register tables (HRTs), one for updating the prediction information and the other for recovery.

2) Design space of hyperblock-based branch predictors

In this section we describe the design space of hyperblock-based branch predictors in detail. We consider a two-level predictor [5] which predicts the first branch that will be taken in a hyperblock: the first level predicts the exit point, and the second level produces the branch target address, as shown in Figure 1.

Figure 1. 2-level hyperblock-based branch predictors

a) Exit predictor. Corresponding to the branch behaviors of conventional branch predictors, we replace the T/NT bits stored in the pattern history table (PHT) with the exit numbers of each block, and the branch histories in the history register table (HRT) are likewise replaced by the recent exit history of blocks. Based on conventional branch predictors and the characteristics of hyperblocks, the exit predictors [7] can be organized in the following ways.

Global predictor: the global predictor indexes a table of bimodal counters with the recent global history integrated with branch instruction addresses to obtain the branch behavior.

Local predictor: the local predictor keeps two tables. One is a local BHT indexed by the low-order bits of each branch instruction's address, recording the exit history of the N most recent executions of each branch. The other is a PHT indexed by the value generated from the branch history in the BHT. Local prediction is slower than global prediction because it requires two sequential table lookups for each prediction, but it may be favorable for deep prediction of certain local branches.

Tournament predictor: since different branches may be predicted better with either local or global techniques, the tournament predictor uses a choice predictor to dynamically select the better method for each branch. The choice prediction is made from a table of 2-bit prediction counters indexed by path history; the processor trains the counters to prefer the correct prediction whenever the local and global predictions differ. The tournament predictor is nearly as accurate as the local predictor and almost as fast as the global predictor.

b) Block target address prediction. The block branch target address determined by the exit instruction is the starting address of the next block. After the exit is predicted, we use both the predicted exit and the block address to determine the target: as shown in Figure 2, each BTB entry maintains the target addresses of several exits together with hysteresis bits and is indexed by the block address and the exit ID predicted by the exit predictor. The accuracy of the predictor therefore depends on how frequently the previous branch target address recurs in the history.

Figure 2. Structure of the BTB

3) Evaluation of three branch predictors

In this section we evaluate the prediction accuracy of the exit predictors combined with the block target address predictor presented above on 8 benchmarks from SPEC2000. Both the global and the local predictor are configured with 16384 entries, the tournament predictor contains 8192 entries for each of its global and local components, and the BTB has 1024 entries.

Figure 3. Prediction accuracy of the three branch predictors

From Figure 3, it is clear that the global predictor performs better than the local predictor on art, bzip, gzip and mcf, in which global history is predominant, while equake and vortex, on which local histories have more impact, perform better with the local predictor. The tournament predictor, which can adapt to the patterns of applications by exploiting both global and local histories, achieves better performance than either of them.
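A hedged sketch of the two-level scheme just described: an exit predictor that keeps a per-block exit history and a pattern table of exit IDs, plus a BTB keyed by (block address, exit ID). The table sizes, hashing and update policy here are illustrative choices, not the paper's actual design.

    class ExitPredictor:
        def __init__(self, hist_len=3, pht_size=4096):
            self.hrt = {}                        # block addr -> recent exit IDs
            self.pht = [0] * pht_size            # pattern -> predicted exit ID
            self.hist_len, self.pht_size = hist_len, pht_size

        def _index(self, block_addr):
            history = tuple(self.hrt.get(block_addr, ()))
            return hash((block_addr, history)) % self.pht_size

        def predict(self, block_addr):
            return self.pht[self._index(block_addr)]

        def update(self, block_addr, taken_exit):
            self.pht[self._index(block_addr)] = taken_exit
            hist = self.hrt.setdefault(block_addr, [])
            hist.append(taken_exit)
            del hist[:-self.hist_len]            # keep only the recent history

    class BTB:
        def __init__(self):
            self.table = {}                      # (block addr, exit ID) -> target block

        def predict(self, block_addr, exit_id):
            return self.table.get((block_addr, exit_id))

        def update(self, block_addr, exit_id, target):
            self.table[(block_addr, exit_id)] = target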

B. Speculative execution on the data flow

Data dependence is an important factor that influences the effectiveness of multi-level speculative execution. To be able to obtain benefits from data speculation, instruction streams must still execute correctly, so we make two assumptions before the analysis: (1) a perfect branch predictor is assumed; (2) the complexity of a hardware implementation is not taken into account. These assumptions do not affect the analysis of the natural characteristics of the programs.

The dependence relation between instructions in traditional processors can be described by the dependence distance (the number of instructions between data producers and consumers), but it is no longer applicable for measuring the hyperblock-based model, because instructions differ in how they separate producer and consumer. We therefore define the term dependence depth to measure the degree of dependence between blocks:

Dependence depth = Σ (i = 1 … ∞) Pi × i    (1)

where i denotes the distance from the current block and Pi denotes the proportion of instructions that depend on instructions in the previous i-th block, relative to the total number of instructions within the current block. The dependence depth describes the dependence strength between blocks: a larger dependence depth means weaker dependence, and it is proportional to the potential depth of speculative execution, which indicates that the application is more amenable to deeper speculative execution. Programs with a larger dependence depth could thus benefit from speculating more blocks, which helps us adopt an appropriate prediction depth while studying the aggressive execution model.

III. EXPERIMENTAL EVALUATION

A. Methodology

Our experiments are performed on the TRIPS toolchain, which supports hyperblock-based multi-level speculation. It contains the compiler (Scale [3]), the functional simulator tsim_arch, and the cycle-accurate simulator tsim_proc [6]; tsim_proc can generate trace files containing all events of the simulation process. Although our experiments are based on TRIPS, our research is not limited to this architecture but aims at all current hyperblock-based execution models. We use 8 whole benchmarks written in C from the SPEC2000 benchmark suite, including 3 floating-point benchmarks (art, ammp, equake) and 5 integer benchmarks (gzip, mcf, parser, bzip2, vortex).

B. Speculative execution on control dependences between hyperblocks

A large instruction window is built to issue more independent instructions per cycle, and instructions in the same block can execute in parallel, so that we can exploit more parallelism across blocks; TRIPS is a block-atomic execution model in which the instruction window can reach a size of 1024 through speculation. The effective size of the instruction window, however, is limited by the depth of control-flow speculation, so we evaluate the feasibility of predicting more aggressively and analyze the appropriate depth of prediction.

1) Evaluating predictors with a fixed prediction depth

We evaluate the prediction accuracy of the global, local and tournament predictors with fixed prediction depths ranging from 1 to 7, as shown in Figure 4.

Figure 4. Prediction accuracy of the three branch predictors at prediction depths 1-7: (a) global, (b) local, (c) tournament

For each predictor, the prediction accuracy decreases as the prediction depth increases, but the gradient of the decrease slows at deeper depths. The global and the local predictor each have strengths on different types of applications at small depths, but the local predictor performs better as the depth increases. The tournament predictor performs best, as it can adapt to the patterns of the applications using both of the other predictors, and most applications still keep a high prediction accuracy even when the prediction depth is up to 7. This demonstrates that deeper prediction is feasible for most applications.
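Equation (1) transcribes directly into code. In this sketch p[i] is the measured fraction of the current block's instructions that depend on the i-th previous block; the profile values are made-up sample data, not measurements from the paper.

    def dependence_depth(p):
        """p[0] is P_1, the fraction depending on the immediately previous block."""
        return sum(frac * (i + 1) for i, frac in enumerate(p))

    profile = [0.40, 0.20, 0.10, 0.05]     # hypothetical P_1..P_4 for one block
    print(dependence_depth(profile))        # 0.40*1 + 0.20*2 + 0.10*3 + 0.05*4 = 1.30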

2) Distribution of prediction depth over blocks

We introduce a quantitative method for denoting the deep-prediction potential of distinct applications intuitively. We predict each block with unbounded depth until the prediction can no longer continue, so that each application is divided into several block sequences with various block counts. From a statistical analysis of the prediction depths among these sequences, we can estimate the potential of deep prediction for each application.

Figure 5. Proportion of blocks vs. prediction depth

Figure 5 illustrates the proportion of blocks predicted at different prediction depths (0-1, 2-5, 6-10, and above 10) with the tournament predictor, where depth 0 indicates blocks that cannot be predicted. art and ammp have high proportions at the deeper prediction depths, which shows that these applications accommodate such a prediction model well.

3) Expected prediction depth

The prediction depth distribution cannot by itself reflect the actual quality of prediction for an application, so we further introduce the expected prediction depth: the mean value of the prediction depth distribution, which reflects the magnitude of prediction and, indirectly, the prediction accuracy.

Figure 6. Expected prediction depth

Figure 6 shows the expected prediction depth evaluated with the size of the local and global PHT configured to 16384 entries. The values for ammp, art, mcf and vortex exceed 10, while the others are around 5. Although these values are closely related to the compiler, the differences are chiefly attributable to the natural characteristics of the applications. From the results we can also see that the type of predictor has a large impact on the expected prediction depth.

Figure 7. Expected prediction depth with differently configured predictors

Figure 7 describes the expected prediction depth with a tournament predictor configured with 16 times the size of the old PHT. We can see that the configuration of the predictor has much less impact on the expected prediction depth: low prediction accuracy is not caused by conflicts in the PHT but comes from the characteristics of the applications. vortex, art and ammp can be predicted aggressively, while bzip and gzip are just the opposite and are less predictable. To adapt better to the patterns of applications, we should therefore consider further improvements to the predictors themselves.

C. Speculative execution on data dependences

1) Numbers of dependent instructions in applications

Data dependence between hyperblocks arises from load and register-read instructions (the dependent instructions).

Figure 8. Numbers of dependent instructions

Figure 8 presents statistics on the numbers of overall instructions, load instructions and register-read instructions within hyperblocks. The number of dependent instructions in a hyperblock is considerable, from approximately 18% in gzip to 52% in ammp.

2) Distribution of data dependences

The preceding instruction counts only reflect how ubiquitous data dependence is under a block-level execution model; they cannot fully substitute for the data dependence behavior, so we further analyze the distribution of data dependences.

Figure 9. Distribution of data dependences in applications

As Figure 9 shows, most data dependences of a hyperblock come from its immediately preceding blocks: our experiments show that 20% or even 40% (parser) of the data dependences lie within two adjacent blocks, and on average 40% or even 60% (parser) within six contiguous blocks. Results may differ when a block has data dependences with blocks located in different places. This unbalanced distribution of data dependences is not what we would hope for, since serious dependence between adjacent blocks prevents subsequent blocks from executing even if the speculation depth is increased. We would therefore not achieve the anticipated speedup if we only increased the speculative depth without considering the impact of data dependence.
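A sketch of the expected-prediction-depth metric as the text defines it: the mean of the per-block prediction-depth distribution, with depth 0 for unpredictable blocks. The histogram below is illustrative, not measured data.

    def expected_prediction_depth(depth_counts):
        """depth_counts maps achieved prediction depth -> number of blocks."""
        total = sum(depth_counts.values())
        return sum(d * n for d, n in depth_counts.items()) / total

    hist = {0: 10, 2: 30, 5: 40, 8: 20}     # hypothetical depth histogram
    print(expected_prediction_depth(hist))   # (0*10 + 2*30 + 5*40 + 8*20) / 100 = 4.2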

3) Dependence depth and expected speculation depth

Figure 10. Data dependence depth in applications

Figure 10 shows the dependence depth of the applications. Hyperblocks have good predictability in most applications, but whether they are suitable for aggressive execution depends on the data dependences between adjacent hyperblocks. Figure 11 shows the expected speculation depth under the constraint of a given proportion of data dependence.

Figure 11. Expected speculation depth vs. data dependence

As we observe, the SPECINT applications have low speculation depths of less than 10 (4 to 8 on average), while the SPECFP applications have better speculation depths of up to 64 blocks at 50% data dependence. In equake, for instance, the expected speculation depth is only up to 5 at less than 47.5% data dependence, because the data dependence within 6 blocks is 48.696% (Figure 9), which is greater than 47.5%.

IV. RELATED WORK

Speculative execution on hyperblocks has mainly followed two directions. One is resolving the control flow between hyperblocks, with multiple branch predictors and region predictors; the other is predication, which converts control dependence to data dependence by merging multiple control flows into a single control flow.

Yeh et al. studied multibranch predictors that typically predict 2 or 3 targets at a time, while Wallace et al., Seznec et al. and Conte et al. used saturating counters to predict multiple branches. The exit predictor is a region predictor first proposed by Pnevmatikatos et al. [8] in the context of the Multiscalar processor and subsequently refined by Jacobson et al. [5]; exit predictors predict only the first branch that will leave a code region (such as a Multiscalar task). The local, global and path-based exit predictors, the folding of exit histories, and the hysteresis bits in the PHT used in this paper were proposed by Jacobson et al.

Predication of individual instructions was first proposed by Allen et al. in 1983 and implemented in the wide-issue Cydra-5. Mahlke et al. [4] first developed the modern notion of the hyperblock, which was extended by August et al., who proposed a framework to balance control speculation and predication through smart basic-block selection at hyperblock formation. Dynamic predication [3] predicates dynamically at run time and allows predication without predicated ISA support.

V. CONCLUSIONS

In this paper we have introduced a preliminary evaluation of an aggressive execution model. Our experiments concentrate on the characteristics of dependence distribution and prediction depth from the viewpoints of speculative execution on both the control flow and the data flow. Under this model, most applications have good predictability: although the control-flow prediction accuracy decreases as the prediction depth increases, it remains within acceptable bounds, and many applications have a high expected prediction depth, which mainly depends on the applications themselves rather than on the predictors. The hyperblock-based execution model can tremendously increase the size of the instruction window for high ILP and good performance, but whether an application is suitable for aggressive execution depends on the data dependences between adjacent hyperblocks, since enlarging a hyperblock further becomes more difficult under the constraints of compiler technology and the inherent characteristics of applications.

ACKNOWLEDGEMENT

This research was supported financially by the National Basic Research Program of China under contract 2005CB321601, the Natural Science Foundation of China grant 60633040, the National Hi-tech Research and Development Program of China under contract 2006AA01A102-5-2, and the China Ministry of Education & Intel Special Research Foundation for Information Technology under contract MOE-INTEL-08-07.

REFERENCES

[1] A. Uht, V. Sindagi, and K. Hall, "Disjoint eager execution: An optimal form of speculative execution," in Proceedings of Micro-28, 1995, pp. 313-325.
[2] S. Mahlke, D. Lin, W. Chen, R. Hank, and R. Bringmann, "Effective compiler support for predicated execution using the hyperblock," in Proceedings of the 25th International Symposium on Microarchitecture, Dec. 1992, pp. 45-54.
[3] A. Smith, J. Gibson, B. Maher, N. Nethercote, B. Yoder, D. Burger, and K. S. McKinley, "Compiling for EDGE architectures," in Proceedings of the International Symposium on Code Generation and Optimization, 2006.
[4] T. N. Vijaykumar, "Compiling for the Multiscalar architecture," Ph.D. dissertation, University of Wisconsin, 1998.
[5] Q. Jacobson, S. Bennett, N. Sharma, and J. E. Smith, "Control flow speculation in multiscalar processors," in Proceedings of the 3rd International Symposium on High Performance Computer Architecture, Feb. 1997.
[6] TRIPS toolset, available from: http://www.cs.utexas.edu/~trips/dist/
[7] T.-Y. Yeh and Y. Patt, "Two-level adaptive branch prediction," in Proceedings of the 24th International Symposium on Microarchitecture, 1991, pp. 51-61.
[8] D. Pnevmatikatos, M. Franklin, and G. Sohi, "Control flow prediction for dynamic ILP processors," in Proceedings of the 26th Annual International Symposium on Microarchitecture, Dec. 1993, pp. 185-195.

Parallel Method for Discovering Frequent Itemsets Using Weighted Tree Approach

Preetham Kumar
Department of Information and Communication Technology, Manipal Institute of Technology, Manipal, India
prethk@yahoo.com

Ananthanarayana V S
Department of Information Technology, National Institute of Technology Karnataka, Surathkal, India
anvs1967@gmail.com

Abstract—Every element of a transaction in a transaction database may carry components such as the item number, the quantity or cost of the item bought, and other relevant information about the customer, which lead to profit. Most association rule mining algorithms that discover frequent itemsets do not consider these components, and the existing algorithms depend heavily on massive computation that may cause high dependency on memory size or repeated I/O scans of the datasets. The parallel association rule mining algorithms currently proposed in the literature inherit most of these problems and in addition need costly communication between nodes. This motivated us to propose a parallel algorithm that discovers all frequent itemsets based on the quantity of each item bought, in a single scan of the database. The method achieves its efficiency by applying two new ideas. Firstly, the transaction database is converted into an abstraction called the Weighted Tree, which prevents multiple scans of the database during the mining phase, is less reliant on memory size, and does not require high communication costs between nodes; this data structure is replicated among the parallel nodes. Secondly, for each frequent item assigned to a parallel node, an item tree is constructed and frequent itemsets are mined from this tree based on a weighted minimum support.

Keywords—attribute, component, cost, parallel, variant, weight

I. INTRODUCTION

The goal of knowledge discovery is to utilize existing data to find new facts and to uncover relationships that were previously unknown, in an efficient manner with minimum utilization of space and time. Mining association rules, which describe potential relations among data items (attribute, variant) in databases, is an important branch of data mining, and discovering frequent itemsets is considered one of its most important tasks; it has been the focus of many studies in the last few years, and many solutions have been proposed using sequential or parallel algorithms based on a user-defined minimum support. The well-known Apriori algorithm [3] was proposed by R. Agrawal et al. in 1993. However, new solutions that do not depend heavily on repeated I/O scans, are less reliant on memory size, and do not require large communication costs between nodes still have to be found.
Mining association rules can be stated as follows. Let I = {i1, i2, …, im} be a set of items and let D, the task-relevant data, be a set of transactions, where each transaction T is a set of items such that T ⊆ I. Each transaction is assigned an identifier, called a TID. Let A be a set of items; a transaction T is said to contain A if and only if A ⊆ T. An association rule is an implication of the form A ⇒ B, where A ⊆ I, B ⊆ I, and A ∩ B = ∅. The rule A ⇒ B holds in the transaction set D with support s, where s is the percentage of transactions in D that contain A ∪ B (i.e., both A and B); this is taken to be the probability P(A ∪ B), so that support(A ⇒ B) = P(A ∪ B) = s. The rule A ⇒ B has confidence c in the transaction set D if c is the percentage of transactions in D containing A that also contain B; this is taken to be the conditional probability P(B|A), so that confidence(A ⇒ B) = P(B|A) = support(A ⇒ B) / support(A) = c. Mining association rules means finding all association rules with support and confidence greater than or equal to the user-specified minimum support and minimum confidence respectively [2]. The problem can be decomposed into two sub-problems: discovering all itemsets with support above the user-specified minimum support (these itemsets are called the frequent itemsets), and generating, for each frequent itemset, all the rules that have the user-defined minimum confidence. The second sub-problem is relatively straightforward, as described in [1].

The quantities of the items bought are usually not considered. In a large database, however, an itemset that appears in very few transactions may be purchased in large quantities, which can lead to very high profit; the quantity, cost and other relevant information about the customer are therefore the most important information, and ignoring these components may cause a loss of information. This motivated us to design a new parallel algorithm based on the concept of the Weighted Tree.
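As a minimal sketch of the support and confidence definitions just quoted, with toy transactions:

    def support(itemset, transactions):
        return sum(itemset <= t for t in transactions) / len(transactions)

    def confidence(a, b, transactions):
        return support(a | b, transactions) / support(a, transactions)

    D = [{"A", "B"}, {"A", "C"}, {"A", "B", "C"}, {"B", "C"}]
    print(support({"A", "B"}, D))        # 2/4 = 0.5
    print(confidence({"A"}, {"B"}, D))   # support(A∪B)/support(A) = 0.5/0.75 ≈ 0.67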

II. PROPOSED METHOD

The algorithm is divided into two phases. The first phase requires only one scan of the database and generates a special disk-based data structure called the Weighted Tree, in which every element of each transaction represents either the quantity or the cost of the respective attribute or item. In the second phase, the Weighted Tree is reduced to contain only frequent items and all branches of the tree are sorted in increasing order of frequency; the resulting tree, called the ordered Weighted Tree, is replicated among the parallel nodes and mined in parallel.

It may happen that an itemset appearing in very few transactions, but in a large quantity or at a cost that leads to profit, does not qualify as a frequent itemset under the user-defined minimum support. For example, in the sample database given in Table 1, if the user-defined minimum support is 2 transactions, then item D is not frequent and will not appear in the set of frequent itemsets, even though it is bought in a large quantity and leads to more profit than the other frequent items. This results in a loss of information. The weighted minimum support is the minimum weight an itemset has to satisfy to become frequent; if an itemset satisfies the user-defined weighted support, we say that it is a weighted frequent itemset.

Table 1. Sample database

A. Structure of the Weighted Tree

The idea of this approach is to associate each item with all the transactions in which it occurs. The Weighted Tree has two different kinds of nodes, and one of its branches is shown in Figure 1. The first kind of node, labeled with an attribute, contains the attribute name and two pointers: one pointing to the nodes containing transaction ids and weights, and a child pointer pointing to the next attribute. This node is the head of its branch. The second kind of node has two parts: the first part, labeled TID, represents a transaction number or id, and the second part, labeled weight, indicates the quantity purchased in that transaction, its cost, or another component. This node has only one pointer, pointing to the next object having this particular attribute.

Figure 1. Branch of a Weighted Tree

Figure 2 represents the Weighted Tree corresponding to the sample database given in Table 1.

Figure 2. Weighted Tree for Table 1

The parallel algorithm involves the following steps:
A. Construction of the Weighted Tree;
B. Removal of infrequent attributes of the Weighted Tree;
C. Arrangement of the attribute lists of the Weighted Tree in increasing order of their weighted minimum support;
D. Parallel mining of frequent itemsets at the different nodes for the assigned items, based on weight.

A. Construction of the Weighted Tree

Algorithm for constructing the Weighted Tree
Input: the database D
Output: the Weighted Tree

for each attribute weight w in a transaction t ∈ D do
begin
  create a node labeled with weight w and add this node to the respective attribute node
end
The attribute lists are sorted in increasing order of their weights and is shown in Figure 4 and is called Ordered Weighted Tree. for each attribute in a Weighted tree do begin if sum(weights of all nodes) < w_min_sup then remove that branch from the tree end For example. After ordering the frequent items by their support. This approach relies on distributing frequent items among the parallel nodes. If we consider w_min_sup = 10 then the attributes A. The ordered Weighted Tree is replicated among all parallel nodes. and so on up to processor m. then builds independent and relatively small trees for each frequent item in the transaction database called item tree. This replication is executed from a designated master node. The trees are discarded as soon as mined. Reduced Weighted Tree of Table 1 C. Parallel mining of frequent itemset at different nodes for assigned items based on weight. C has 12 and A has 19. B. Processor 2 builds tree for the next least frequent item. each processor successively receives one item. assuming m< n then processor1 builds the tree for the least frequent item. Each parallel node is responsible for generating all frequent patterns related to the frequent items associated to that node. The reduced Weighted Tree of Table 1. The process is repeated until all items are distributed. The attributes B and D are found to be infrequent. if we have m processors. starting from the least frequent. By doing so a full set of transaction database D. Finally. To do so each node reads sub-transactions for each frequent items directly from the Ordered Weighted Tree. we see that attribute D ahs weight 10. Figure 4. The each node of the item tree consists of item number and its count and two pointers called child and sibling 126 . After that processor 1 takes item m+1 and so on until all n items are distributed. In the above case if we consider min_sup=3 then only attributes A and C are frequent in the database. end containing only frequent items would be available to each processor to generate all globally frequent patterns with minimum communication cost at parallel node level. Each parallel node mines separately each one of these item trees as soon as they are built. Arrange the attribute list in an increasing order of their weighted minimum support In Figure 4. and n trees need to be built. C and D will be frequent in the database. Ordered Weighted Tree Figure 3. In other words.node to the respective attribute node. all frequent patterns generated at each node are gathered into master node to produce the full set of frequent patterns. Removal of infrequent attributes of Weighted Tree Input : w_min_sup = weighted minimum support Output: Reduced Weighted Tree. is shown in Figure 3.

If there are m processors and n trees. Whereas. Item Trees for items D. The weight of {AC} = 18 is greater than user defined weighted minimum support. In the worst case. C. Similarly. In our example. {C}. C}. Similarly. if there G transactions. then there will Gm nodes in the tree. Traverse item tree for I for each maximal path from a root to a leaf in the tree for an item I begin T= {elements from I to leaf with weight} if sum of weights of elements of [T] >= w_min_sup MI= {elements from I to leaf with weight} end C. B. If more than one frequent items share the same prefix. In this tree for D. this tree can be constructed in O (G) steps. A The tree for any frequent item say f contains only nodes labeled with items that are more frequent or as frequent as f and share at least one transaction with f. w_min_sup Output: set of all frequent itemsets at node i. Processor 2 would generate all frequent patterns related to item C. C}}. Reduction of Weighted Tree If there are m attributes. Arrange the attribute list in an increasing order of their weighted minimum support If there are n attributes in the reduced Weighted tree then arranging them in an increasing order of the weight is in O (n2). if we need to mine the Ordered Weight Tree in Figure 4. a branch is formed starting from the root node. Therefore at node 1 only frequent itmestes are one itemsets. Apply downward closure property[1] to get all frequent itemsets from MI which contains all maximal frequent itemsets.pointers. THEORETICAL ANALYSIS A. for every assigned item I. Input: A Ordered Weighted tree. then G transactions have to be read by the algorithm to construct Weighted Tree. Tree for D does not contain any other node except itself. the starting node for each parallel node would be D. Hence it is frequent. Construction of Weighted Tree In a transaction database D. (ii) The algorithm FP-Tree requires 2 scans to discover all frequent itemsets and mining patterns involve construction of 127 . the higher the support of a frequent item. Figure 5. If w_min_sup =10 then applying above algorithm. we see that at node 1. Therefore. {D}. C and A. The tree starts with the root node containing the item D. If there are k infrequent items then Reduced Weighted Tree will contain (m-k) attribute lists with at most G (m-k) nodes.{ A. If there are n frequent items then there will be n item trees. the first frequent item. then there will be GM nodes in a Weighted Tree. Therefore it is ignored. (i)This method is space as well as time efficient than Apriori since it involves candidate generation method and requires multiple scan over the database. For each subtransaction containing the item D with other items that are more frequent than D. If the average length of the transaction t is equal to m weights. using 2 processors machine with weighted minimum support is equal to 10. Hence Fw = { {A}. D. The first an item tree is built for item D. Figure 5 illustrates trees for frequent items D. at node 2. {D} and {A}. all frequent items which are more frequent than D and share transactions with D participate in building the tree. they are merged into one branch and their count fields are incremented. tree corresponding to A contains only one node and is also ignored. then reduction of Weighted Tree is in O (m). our method requires only one scan of the database. construct item tree for I containing transactions associated with I at node i. Also all its subsets are frequent by using downward closure property. 
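A sketch of the round-robin item distribution described in part D above: with m processors and n frequent items sorted least-frequent first, processor k (0-indexed here) takes items k, k+m, k+2m, and so on. This is purely illustrative.

    def assign_items(items_by_frequency, m):
        """items_by_frequency: frequent items sorted least-frequent first."""
        return {k: items_by_frequency[k::m] for k in range(m)}

    items = ["D", "C", "A"]             # least frequent first, as in the example
    print(assign_items(items, 2))       # {0: ['D', 'A'], 1: ['C']}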
Processor 1 finds all frequent patterns related to items D and A. item tree corresponding to C is built. if there are M weights and G transactions in D. Parallel mining of frequent itemset at different nodes for assigned items based on weight. In other words. The child pointer points to the following item and sibling pointer points to the sibling node. Merits In our research we have implemented this algorithm and found that this algorithm is better than Apriori and FP-tree algorithm. Based on this definition. if A has weight greater than B then tree for B is larger than the tree for A. m<n then this step is in O( n/m). Each processor will construct one tree at a time. III. The maximal path corresponding to this tree is {A. the smaller its tree is.

which involves header table. Lu. [7] O. Netherland” 53-87.Zaiane. J. [6] Ananthanarayana V. Hyderbad. of ACM-SIGMOD International Conference Management of Data. El-Hajj. “Scalable. and P. 2000. “Mining Frequent Patterns without Candidate Generation”. Yin. As such. Kluwer Academic Publishers. San Jos. December 2001 [1] Arun K Pujari “Data mining Techniques”: Universities Press(Indis) Private Limited. Y. 1-12.(2000). distributed and dynamic mining of association rules” In Proceedings of HIPC'00. IV. M. D. we are still working on it with the aim of extending the application of this algorithm to various kinds of databases. Pei Jian. 128 ... Dallas. Runying Mao(2004) ” Mining Frequent Patterns without Candidate Generation: A Frequent Pattern Tree Approach” . 2004. Han and M.the condition based tree. Subramanian. USA.1. Data Mining and Knowledge Discovery.(1998) “ Mining Large Itemsets for association rules. CONCLUSION [3] Han. 559-566. India. S. Morgan Kaufmann Publishers. [4] Han J. CA:. Our method uses item tree to mine maximal frequent itemsets and does not involve header table. of the IEEE 2001 International Conference on Data Mining. Springer Verlag Berlin.Heidelberg. [2] J. REFERENCES mining without candidacy generation” In Proc. Pei. No.K. Narasimha Murthy M. [5] Agrawal Charu and Yu Philip. J. 21. Proc. “Fast parallel association rule V. The Parallel algorithm for discovering frequent itemsets based on weight is a new method and is found to efficient when compared to Apriori and FP-tree. Bulletin of the IEEE Computer Society Technical committee on Data Engineering.. 20003. Kamber. TX. “Data MiningConcepts and Techniuqes”: San Franscisco. CA.R.

Optimized Design and Implementation of a Three-Phase PLL Based on FPGA

Yuan Huimei, Sun Hao, Song Yu
College of Information Engineering, Capital Normal University, Beijing, 100048, China
yuanhmxxxy@263.net, sh_1983@sina.com

Abstract

An optimized method for designing and implementing a digital three-phase phase-locked loop (PLL) on an FPGA is presented in this paper. First, the principle and basic structure of the PLL, including the phase discriminator, the loop filter and the voltage-controlled oscillator (VCO), are introduced, and these modules are designed in the VHDL language using a block-based design method. A new algorithm combining the CORDIC algorithm with a look-up table is proposed for generating the sine function, aiming to increase the computing speed and improve the accuracy of the results. To save logic resources on the FPGA, an optimization called chip-area sharing is adopted. Finally, the design is verified on ALTERA's FPGA Cyclone EP1C12Q240C8. The results show that this improvement scheme increases the computing speed and reduces the usage of logic resources, and that the PLL can track frequency variations and lock the fundamental phase well.

1. Introduction

Flexible electric power transmission systems, which are widely used in the electricity system, require accurate and real-time phase information of the system voltage. The frequency and phase of the voltage in the electric network are indispensable, so phase tracking is one of the most important components in the system [1]. Analog or digital phase-locked loops (PLLs) are widely used in general instruments to achieve zero detection [2]. The discrete Fourier transform (DFT) is the most commonly used method for detecting frequency and phase, because it suppresses harmonics well [3]; with asynchronous sampling, however, where the width of the sampling window is not equal to an integer multiple of the signal period, it causes measurement errors because of spectrum leakage [4]. Moreover, when a rectifier notch appears in the signal, the method mentioned above cannot cope with it. Reference [5] has analyzed the phase-detection error of a three-phase PLL when interference such as asymmetry, harmonics and offset exists in the voltage signal; a three-phase PLL obtains better performance if the system can separate the positive and negative voltages and feed back the positive voltage [6].

The phase-tracking algorithm is usually implemented in software using DSP technology because of its complexity, but this takes a great deal of CPU time and the achievable performance is limited. Implementing a three-phase PLL in hardware on a field programmable gate array (FPGA) is a new design scheme: it is a pure hardware approach that processes signals in parallel and is able to achieve high performance.

In this paper, the principle, system components and implementation algorithm based on FPGA are presented first: the components of the three-phase PLL, such as the phase discriminator, loop filter and voltage-controlled oscillator (VCO), are designed in VHDL. Then the design is improved by a chip-area-sharing scheme to save logic resources on the FPGA, and a new method combining the CORDIC algorithm with a look-up table is proposed to generate the sine function, which both increases the calculation speed and guarantees the accuracy of the results. Finally, the improved design is verified on ALTERA's FPGA Cyclone EP1C12Q240C8; it is shown that the design saves FPGA logic resources substantially, and it is also proved that the PLL can track frequency variations and lock the fundamental phase well.

2. Principle of the three-phase PLL

A PLL is composed of three main components: the phase discriminator (PD), the loop filter (LF) and the voltage-controlled oscillator (VCO).

Figure 1. Block diagram of the three-phase PLL

The three-phase balanced voltage can be described as

uabc = [ua  ub  uc]T = Us [sin θ   sin(θ − 2π/3)   sin(θ + 2π/3)]T    (1)

uabc can be transformed into the two-phase voltage uαβ = [uα  uβ]T in the αβ0 coordinate frame through uαβ0 = A·uabc, with the transform matrix

A = (2/3) [ 1    −1/2     −1/2
            0    −√3/2    √3/2 ]    (2)

Writing θ1(t) = ω1(t)t + φ1(t) − ω0t, where ω0, assumed here to be a constant, is the center frequency of the VCO, we get

uα = Us sin(ω0t + θ1),  uβ = Us cos(ω0t + θ1)    (3)

The PD's feedback inputs uuα and uuβ are the cosine and sine functions of the VCO phase θ with gain KL:

uuα = KL cos θ,  uuβ = KL sin θ    (4)

The output of the PD is then

ud = uα·uuα − uβ·uuβ = Us KL sin(ω0t + θ1 − ω0t − θ2) = Us KL sin(θ1 − θ2)    (5)

Assuming the PLL is starting to lock, the phase error (θ1 − θ2) is zero or infinitesimal, so the sine in (5) is approximately equal to the phase error:

ud = Us KL (θ1 − θ2) = Kd (θ1 − θ2),  where Kd = Us KL    (6)

3. FPGA design of the three-phase PLL

3.1. FPGA design of the PD

In the calculation of the three-phase PLL, the three-phase voltage first needs to be converted to a two-phase voltage through the matrix operations above; this two-phase voltage is then multiplied by the closed-loop feedback, and the results are added to obtain the d-axis component ud we need, as shown in Figure 1. If we process the signals following the design described in Figure 1 directly, we must perform 10 multiplications, 10 additions and 4 trigonometric evaluations in total.

3.2. FPGA design of the LF and VCO

From Figure 1 we can see that the input uf(t) of the VCO is the output of the loop filter. To guarantee both the performance of the LF and the stability of the dynamic system [7], a proportional-integral (PI) filter is used in this paper, whose transfer function is

Uf(s)/Ud(s) = Kp + Kl/s    (7)

where Kp is the proportional gain and Kl is the integral gain. The VCO module must generate the output phase signal θ2. The overall VCO phase is

θ(t) = ∫−∞..t [ω0 + Δω(λ)] dλ = ω0t + ∫−∞..t Kv·uf(λ) dλ    (8)

Using θ2(t) = θ(t) − ω0t to rewrite (8), we obtain

θ2(t) = ∫−∞..t Kv·uf(λ) dλ    (9)

so the VCO can simply be expressed as an integrator with gain Kv, whose transfer function is

θ2(s)/Uf(s) = Kv/s    (10)
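A numerical sketch of the phase-discriminator equations above (the αβ transform of Eq. (2), the feedback of Eq. (4), and the error signal of Eq. (5)); KL, Us and the sampled phase angles are illustrative values, and θ1 and θ2 are used directly as total phases.

    import math

    def clarke(ua, ub, uc):                     # Eq. (2)
        ualpha = (2 / 3) * (ua - 0.5 * ub - 0.5 * uc)
        ubeta = (2 / 3) * (math.sqrt(3) / 2) * (uc - ub)
        return ualpha, ubeta

    def pd_output(theta1, theta2, Us=1.0, KL=1.0):
        ua = Us * math.sin(theta1)
        ub = Us * math.sin(theta1 - 2 * math.pi / 3)
        uc = Us * math.sin(theta1 + 2 * math.pi / 3)
        ualpha, ubeta = clarke(ua, ub, uc)
        # feedback from the VCO phase, Eq. (4)
        uualpha, uubeta = KL * math.cos(theta2), KL * math.sin(theta2)
        return ualpha * uualpha - ubeta * uubeta     # Eq. (5)

    print(pd_output(0.11, 0.10))   # ≈ Us*KL*sin(0.01) for a small phase error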

After analyzing the design described above, we find that if we use the d-q coordinate transformation directly,

$T_{dq}=\begin{pmatrix}\cos\theta & \cos(\theta-\tfrac{2\pi}{3}) & \cos(\theta+\tfrac{2\pi}{3})\\[2pt] \sin\theta & \sin(\theta-\tfrac{2\pi}{3}) & \sin(\theta+\tfrac{2\pi}{3})\end{pmatrix},\qquad [u_d\ u_q]^T = T_{dq}\,u_{abc} \qquad (11)$

we can obtain

$u_d = u_a\cos\theta + u_b\left[-\tfrac{1}{2}\cos\theta+\tfrac{\sqrt{3}}{2}\sin\theta\right] + u_c\left[-\tfrac{1}{2}\cos\theta-\tfrac{\sqrt{3}}{2}\sin\theta\right] \qquad (12)$

We only need to implement formula (12), and then the program requires only 5 multiplications, 5 additions and 2 trigonometric calculations to complete the computation. In this way, the workload and complexity of the computation are greatly reduced.

3.2. FPGA design of the LF and VCO

The difference between the input voltage phase $\theta_1$ and the system output $\theta_2$ in (6) implements the unit negative feedback of the system output. $K_p$, $K_l$ and the sampling period $T$ decide the dynamic performance of the system. The closed-loop transfer function of the system is

$\Phi(z)=\frac{K_dK_p(z)}{1+K_dK_p(z)}=K_c\,\frac{z-c}{z^2-az+b} \qquad (13)$

where $K_c = \frac{K_dK_vK_p}{1+K_dK_vK_p}$, $c = 1-\frac{K_l}{K_p}T$, $T$ is the sampling period, and $a$ and $b$ are the closed-loop denominator coefficients determined by $K_d$, $K_v$, $K_p$ and $T$. Analyzing formula (13) shows that when $T > 2K_p/K_l$ this PLL system is unstable, because the open-loop zero lies outside the unit circle.

The design of this PLL system is shown in Fig. 2.

Fig. 2. Block diagram of the FPGA design of the three-phase PLL system.

4. Optimization of the design

As described above, completing all of the computation would use nine multipliers; this is workable, but it wastes the logic resources of the FPGA. After analyzing the design, we find that the process has a fixed time sequence, so, to resolve this problem, we use time-division multiplexing (TDM) to reuse one multiplier iteratively: every multiplication occupies a fixed time slice to complete its computation. In this way the whole system needs only one hardware multiplier, and a large amount of FPGA logic is saved at the same time.

Another important part of this system is the sine and cosine function generator, which supplies the feedback for the system according to the phase information locked by the PLL. It is usually implemented with a look-up table, but a look-up table takes up a lot of FPGA memory. One alternative solution is the CORDIC algorithm [8]. CORDIC works by rotating the coordinate system through fixed angles until the residual angle reduces to zero; the angle offsets are selected such that the operations on X and Y are only shifts and adds, so CORDIC does not take up any FPGA memory. Its flaw, however, is that the accuracy of CORDIC is limited by its iteration count. A new scheme combining CORDIC with a look-up table is proposed in this paper to improve on either CORDIC or the look-up table used individually: it improves the accuracy of CORDIC while saving memory, because, thanks to the symmetry of the sine and cosine functions, we only need to build a table containing the information of a quarter of one period, minimizing the table size [9].
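The paper cites CORDIC [8] but does not list its iterations. The Python sketch below shows the shift-and-add structure of rotation-mode CORDIC and one plausible way of combining it with the quarter-period symmetry mentioned above; the exact partitioning between the table and the CORDIC stage used by the authors is not specified, so this is only an illustration of the general idea.

```python
import math

def cordic_sin_cos(angle, n_iter=16):
    """Rotation-mode CORDIC: the loop body uses only shifts and adds."""
    atan_tab = [math.atan(2.0 ** -i) for i in range(n_iter)]
    gain = 1.0
    for i in range(n_iter):
        gain *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))
    x, y, z = gain, 0.0, angle       # start on the x-axis, pre-scaled by 1/gain
    for i in range(n_iter):
        d = 1.0 if z >= 0.0 else -1.0
        x, y = x - d * y * (2.0 ** -i), y + d * x * (2.0 ** -i)
        z -= d * atan_tab[i]
    return y, x                       # (sin, cos); valid for |angle| < ~1.74 rad

def sin_cos_full(angle, n_iter=16):
    """Fold into the first quadrant (the quarter-period symmetry), then CORDIC."""
    a = angle % (2.0 * math.pi)
    quad = int(a // (math.pi / 2.0))
    r = a - quad * (math.pi / 2.0)
    s, c = cordic_sin_cos(r, n_iter)
    return [(s, c), (c, -s), (-s, -c), (-c, s)][quad]

print(sin_cos_full(4.0), (math.sin(4.0), math.cos(4.0)))
```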

Fig. 3 shows the design of the three-phase PLL after optimization; the numbers 1, 2, 3, ..., 9 in the figure denote the time slices taken up by the multiplications.

Fig. 3. Block diagram of the FPGA design of the three-phase PLL after optimization.

5. Results of experiment

We used VHDL to implement this design and verified it on Altera's FPGA Cyclone EP1C12Q240C8. Here we set $K_v = 128$, $K_p = 0.5$ and the sampling frequency $T = 12.8$ kHz. Fig. 4 shows the result of the hardware simulation: u1, u2 and u3 represent the three-phase voltages, clk is the system clock, here set to 25 kHz, and dataout is the output of this three-phase PLL. From Fig. 4 we can see that, if the input has a steady phase, this three-phase PLL stably outputs the phase information by the third period of the fundamental wave, which can be used to predict the three subtypes of furnace... the locked phase. The processing speed also meets the requirement of the system.

Fig. 4. Hardware simulation result of this system.

Table 1 lists the usage of FPGA logic resources on Altera's Cyclone EP1C12Q240C8 by the sine/cosine generator implemented with the look-up table, CORDIC and the combined method:

Table 1. Usage of FPGA logic resources by the sine/cosine generator
  Method            LEs    Memory (ROM, bits)   M4K blocks
  Look-up table     242    180224               4
  CORDIC            1335   0                    0
  Combined method   675    9216                 3

Table 2 lists the usage of resources on the EP1C12Q240C8 by the whole three-phase PLL before and after optimization:

Table 2. Usage of resources on EP1C12Q240C8 before and after optimization of the three-phase PLL
  Method            Resource        Available   Used    Utilization (%)
  Look-up table     LEs             12060       2546    21
  Look-up table     Memory (bits)   239616      13824   6
  CORDIC            LEs             12060       1210    10
  CORDIC            Memory (bits)   239616      0       0
  Combined method   LEs             12060       983     8
  Combined method   Memory (bits)   239616      9216    4

The results show that the utilization of resources on the FPGA is greatly reduced after optimization: the usage of logic resources in this system is greatly reduced by the improvement scheme mentioned above.

6. Conclusion

This paper has proposed an optimized method to design and implement a digital three-phase PLL based on FPGA. The basic modules of the PLL, including the PD, LF and VCO, are designed in the VHDL language as IP cores. A new method that combines the CORDIC algorithm with a look-up table has been put forward to generate the sine function, and the results show that this new scheme is able both to increase calculating speed and to guarantee the accuracy of the results. An optimized method called chip-area sharing is adopted in order to save logic resources on the chip. It is also proved that the PLL can track the variation of frequency and lock the fundamental phase well.

Acknowledgment

The authors would like to thank the Beijing Science & Technology Government for financially supporting this work through the Beijing Technology New Star project (No. 2006B58).

References
[1] Sang-Joon Lee, Jun-Koo Kang, Seung-Ki Sul. A New Phase Detecting Method for Power Conversion Systems Considering Distorted Conditions in Power System. Industry Applications Conference, Thirty-Fourth IAS Annual Meeting, Conf. Record of the IEEE '99, 1999: 2167-2172.
[2] G. C. Hsieh, J. C. Hung. Phase-Locked Loop Techniques: A Survey. IEEE Transactions on Industrial Electronics, 1996, 43(6): 609-615.
[3] IEEE Working Group Report. Synchronized Sampling and Phasor Measurements for Relaying and Control. IEEE Transactions on Power Delivery, 1994, 9(1).
[4] Phadke A. G., Thorp J. S., Adamiak M. G. A New Measurement Technique for Tracking Voltage Phasors, Local System Frequency, and Rate of Change of Frequency. IEEE Transactions on Power Apparatus and Systems, 1983.
[5] Se-Kyo Chung. A Phase Tracking System for Three Phase Utility Interface Inverters. IEEE Transactions on Power Electronics, 2000, 15(3): 431-438.
[6] Sang-Joon Lee, Jun-Koo Kang, Seung-Ki Sul. A New Phase Detecting Method for Power Conversion Systems Considering Distorted Conditions in Power System. IAS Annual Meeting, Conf. Record, 1999: 2167-2172.
[7] Yang Shi-Zhong, Wen Liang-Yu, Guo Yu-Hua. Foundation of Phase Lock Technology. Beijing: People's Posts and Telecommunications Press, 1978.
[8] Uwe Meyer-Baese. Digital Signal Processing with Field Programmable Gate Arrays, Second Edition. Springer, 2004.
[9] Wu Po, Li Xiao-Chun. Improvement and Implementation of One-Phase Power Phase-Locked Loop Based on FPGA. 2007(4): 80-82.

Research on the Data Storage and Access Model in Distributed Environment

Wuling Ren, College of Computer and Information Engineering, Zhejiang Gongshang University, Hangzhou, Zhejiang, 310018, China, rwl@zjgsu.edu.cn
Pan Zhou, College of Computer and Information Engineering, Zhejiang Gongshang University, Hangzhou, Zhejiang, 310018, China, zhoupan@pop.zjgsu.edu.cn

Abstract: In order to improve data access efficiency and enhance security in IT systems, we study the architecture of the data center according to the characteristics of the distributed computing environment. In the model, information is stored in a number of physically distributed storage nodes. The storage architecture is logically organized into four classes: active data storage, static data storage, backup data storage and data warehouse. Data access efficiency is increased because most applications concentrate on the active or static data storage, where the most recently and most frequently used data are stored. A security access control model is presented based on this study. Data security access is controlled through a 3-layered access control model based on the RBAC model [3,4]; the three layers are the operation layer, the data logic security shield layer and the application layer. The data logic security shield layer shields the differences among the security control mechanisms of the underlying databases, while the operation layer and application layer control authorized access for system managers and application end users respectively. The encrypted centralized authorization provides trustable login and prevents the invasion of illegal users, whether a request comes from inside or outside; it keeps the privacy and consistency of data in the system and maintains integrity and uniformity in the procedures of data gathering, storage, transmission and processing in each subsystem of the distributed computing environment. Data is well protected by good security design and implementation: the security system guarantees that data can be accessed by, and only by, authorized users. Backing up and restoring quickly and reliably is a mandatory requirement of a digital system. The model has been applied to Zhejiang Fuchunjiang Limited Company and was well received.

Foundation item: Project supported by the National Science & Technology Pillar Program, China (No. 2006BAF01A22), and the Science & Technology Research Program of Zhejiang Province, China (No. 2006C11239).

1. Introduction

Information security is particularly important in distributed computing systems, and the security system is the critical part of the data center in the distributed computing environment. In this paper we discuss the data storage architecture of the data center and present a layered access control model with classified distributed storage in the distributed environment [1,2].

The data center collects the massive primary data and all kinds of regenerated summary information from each subsystem, including material information, product information, customer information, supplier information and so on. This information is shared by the various subsystems and used with high frequency, and its amount is so great that the design of the data center must realize security control over every kind of database information.

2. The architecture of data storage in the distributed computing environment

The design of the data center in a distributed computing environment must consider the security of information. In the distributed data management environment, information is stored in a number of physically distributed storage nodes. One of the tasks of the distributed computing system is to process this information effectively, and we must adopt effective methods to maintain the integrity and uniformity of access to these data by the various subsystems, and to guarantee the security of the information.

The information in the distributed computing environment may be roughly divided into four kinds. The first kind is the foundation data of the distributed computing environment, including system parameters, staff codes, department codes in each subsystem, and so on. The second kind is the model meta-data of each subsystem [2], including the models of the process control standards and their executing logs, the modeling execution parameters and the management control standards; this is the main information of the distributed computing execution environment, and its data amount is also great. The third kind is the process information: in regular circumstances, we record and retain only the information on exceptional occurrences or work not yet completely processed in the system; this information is the basis for dynamically adjusting the system procedures. The fourth kind is the information that has passed through processing in the distributed computing environment and compiles the historical data, which we call the data warehouse. [6]

According to these information characteristics, the data center's data storage system is composed of five parts, as Figure 1 shows. One part is the current data storage: it holds the information used at present and with high frequency in the system, and its medium is the hard disk. Another part is the static data storage: it holds the information used at present but with low frequency. These two parts constitute the active data storage, which is organized in database form with the hard disk as its storage medium.

Figure 1. The architecture of data storage in the distributed computing environment.
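To make the four-class organization concrete, here is a small illustrative Python sketch (not from the paper) of storage nodes and the transfer paths described in this and the following section; the class, function and record names are invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class StorageNode:
    name: str
    medium: str                      # "disk", "tape", "cdrom"
    records: dict = field(default_factory=dict)

active = StorageNode("active", "disk")         # current + static data, database form
warehouse = StorageNode("historical", "disk")  # data warehouse, file form
backup = StorageNode("backup", "tape")         # reserve (backup) data storage

def data_transfer(src, dst, key):
    """EXPORT/IMPORT-style move between database-organized storages."""
    dst.records[key] = src.records.pop(key)

def data_backup(src, dst):
    """copy/restore-style backup: duplicate, do not remove, the source data."""
    dst.records.update(src.records)

active.records["order-2008-001"] = {"status": "closed"}
data_backup(active, backup)                          # guard against accidental loss
data_transfer(active, warehouse, "order-2008-001")   # age out cold data
print(backup.records, warehouse.records)
```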

Another part is the historical data storage. It holds the information used with the lowest frequency in the system, such as original documents processed several years ago. The last part is the backup data storage: the information in this reserve data storage is the backup of the others. The static data storage and the reserve data storage are organized in file form, and their media are the hard disk, magnetic tape or CD-ROM. In the distributed computing environment we must set up a backup system for the above data storages, in order to prevent accidental collapse of the system and loss of data.

The methods of data transfer among the various data storages are as follows:

Data transfer: information transfers between the active data storage and the historical data storage, that is, among databases or tables. We can use the EXPORT/IMPORT tools that the database management system provides; if the database management system has not provided this kind of tool, a program is needed to realize the function.

Data export/load: information transfers between the static data storage and the historical data storage. The data are dumped to text files, which saves a lot of space in the data files; programs can be used to realize the procedure.

Data backup/recovery: (1) information transfers between the active data storage and the reserve data storage. The hard disk or the magnetic tape is used as the reserve medium, and the information is backed up by the copy/restore tools provided by the database management system or the operating system; thus the backup information is stored on the medium as files. (2) Information transfers between the static data storage and the reserve data storage, for which the file copy tool can be used.

3. The design of database security

Data security includes two aspects. One is data protection (anti-damage): the former avoids the unintentional destruction of information, for example by operation mistakes such as incautious deletion, computer hardware breakdown, hard disk expiration and so on. The other aspect is data security proper, namely the authority over data: who can use the information and how it can be used.

3.1 Information protection

In the distributed computing environment, the information in the active data storage and the static data storage is the protection object [7]. A backup system must be set up for these storages to prevent the system from accidental collapse, and to guarantee the security of the information.

3.2 Information access control

In the distributed computing environment, except for the administrator, operators are only able to access the distributed computing environment's information through the application subsystems. The complete safety control model is shown in Figure 2. Security control is distributed in three layers, namely the management layer, the operation layer and the application layer. The data center management system provides the data center safety control level, with encryption methods that can be defined by the user; a properly authorized operation on data center information has the same effect as the data center manager's own security control. The main function of the data logic security shield level is to shield the differences among the database safety control mechanisms, so as to provide a consistent support for the application system's security control mechanism. File system safety control carries out the security control of system configuration files and parameter files, the operating-system-level files that need security control and should be controlled by the application system.

Application operations are granted by the security control layer, which is responsible for the authority of the application system and takes the role of privilege granting. A triple is used to describe the three objects of an authorization: (operator, operation, operated object). It decides which functions the operator can use and which data the operator can access or modify. The security control layer determines a grant by matching the corresponding triple in the triple database. To extend flexibility, the application system can define further 2-tuples and triples, such as (operator, manager of operator), which delegate privileges to other operators or groups. An effective time can also be added to the triples to solve the temporary authorization problem, that is, how long the privilege is available.

4. Summary

This paper discussed the architecture of the data center in the distributed computing environment. In the classified distributed storage model, the data distributed over the physical storage nodes is organized into four components: active data storage, static data storage, backup data storage and data warehouse. A layered access control model based on the RBAC model was presented, with security control distributed over three layers. The system has been applied to Zhejiang Fuchunjiang Limited Company and was well evaluated.

Figure 2. The 3-layered access control model in the distributed computing environment.

References
[1] Web Services and Enterprise Integration. EAI Journal, 2001.
[2] B. Lownsbery, H. Newton, et al. The Key to Enduring Access: Multi-organizational Collaboration on the Development of Metadata for Use in Archiving Nuclear Weapons Data.
[3] D. F. Ferraiolo, et al. Role-Based Access Control. Artech House, 2003.
[4] J. Kwon, C.-J. Moon. Visual modeling and formal specification of RBAC constraints using semantic web technology. Knowledge-Based Systems, 2008. doi:10.1016/j.knosys.2008.02.007.
[5] Soomi Yang. An Efficient Access Control Model for Highly Distributed Computing Environment. Distributed Computing, IWDC 2005, Springer Berlin/Heidelberg, 2005. doi:10.1007/11603771.
[6] Hwanjo Yu, Jiong Yang. Classifying Large Data Sets Using SVMs with Hierarchical Clusters. SIGKDD'03, Washington, DC, USA, 2003.
[7] Shigeki Yamada, Eiji Kamioka. Access Control for Security and Privacy in Ubiquitous Computing Environments. IEICE Transactions on Communications, 2005, E88-B(3): 846-856. doi:10.1093/ietcom/e88-b.3.846.
[8] Ninghui Li, Mahesh V. Tripunitara. Security analysis in role-based access control. ACM Transactions on Information and System Security (TISSEC), 9(4), Nov. 2006.
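The (operator, operation, operated object) triples of Section 3.2 can be sketched as follows. This is an illustrative Python model (the paper does not give an implementation), with invented operator and object names, including the optional effective-time window used for temporary authorization.

```python
from datetime import datetime

# Authorization triples: (operator, operation, operated object), optionally
# bounded by an effective-time window for temporary authorization.
triples = [
    ("alice", "read",  "customer_info", None),
    ("bob",   "write", "product_info",
     (datetime(2009, 1, 1), datetime(2009, 2, 1))),
]

def is_authorized(operator, operation, obj, now=None):
    """Grant if a matching triple exists and its effective time covers `now`."""
    now = now or datetime.now()
    for op, act, target, window in triples:
        if (op, act, target) == (operator, operation, obj):
            if window is None or window[0] <= now <= window[1]:
                return True
    return False

print(is_authorized("alice", "read", "customer_info"))            # True
print(is_authorized("bob", "write", "product_info",
                    now=datetime(2009, 3, 1)))                    # False: expired
```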

An Effective Classification Model for Cancer Diagnosis Using Microarray Gene Expression Data

R. Mallika, Department of Computer Science, Sri Ramakrishna College of Arts and Science for Women, Coimbatore, India, mallikapanneer@hotmail.com
Dr. V. Saravanan, Department of Computer Applications, Karunya School of Science and Humanities, Karunya University, Coimbatore, India, tvsaran@hotmail.com

Abstract: Data mining algorithms are commonly used for cancer classification. This paper focuses on finding the small number of genes that best predict the type of cancer. It uses a classical statistical technique for gene ranking and an SVM classifier for gene selection and classification. The SVM one-against-all and one-against-one methods were used with two different kernel functions, their performances were compared, and promising results were achieved. The methodology was applied to two publicly available cancer databases.

Keywords: Classification.

I. INTRODUCTION

Microarray technology has transformed modern biological research by permitting the simultaneous study of genes comprising a large part of the genome [1]. With the help of the gene expression obtained from microarray technology, heterogeneous cancers can be classified into appropriate subtypes; the challenge to effective cancer prediction arises when the data are high dimensional. In response to the rapid development of DNA microarray technology, classification methods and gene selection techniques have been evaluated for the better use of classification algorithms on microarray gene expression data [2][3].

In classification analyses of microarray data, gene selection is one of the critical aspects [4][5][6][7]: selecting a compact subset of discriminative genes from thousands of genes is a critical step for accurate classification. Efficient gene selection can drastically ease the computational burden of the subsequent classification task and can yield a much smaller and more compact gene set without loss of classification accuracy [8][9]. The main objective of gene selection is to search for the genes which keep the maximum amount of information about the class and minimize the classification error [10].

Supervised machine learning can be used for cancer prediction. The input to such models is a set of objects (the training data), a set of variables describing different characteristics of the objects (the independent variables), and the classes to which these objects belong (the dependent variables). The goal of classification is to build a set of models that are able to correctly predict the class of different objects. From samples taken from several groups of individuals with known classes, such a predictive model is built; once built, it can be used to predict the class of objects for which class information is not known, so that the group to which a new individual belongs is determined accurately. The key advantage of supervised learning methods over unsupervised methods like clustering is that, by having explicit knowledge of the classes the different objects belong to, these algorithms can perform an effective feature selection that leads to better prediction accuracy. Prediction models are widely used to classify cancer cells in the human body. The effectiveness of a classifier is commonly evaluated by using part of the dataset as a training set and then using the trained classifier to predict the samples in the rest of the dataset.

Selection of important genes using statistical techniques has been carried out in various papers, for example with the Fisher criterion, signal-to-noise ratio, the traditional t-test, the chi-squared test, the Mann-Whitney rank sum statistic [12] and the Euclidean distance [13]; some of the classification algorithms used were SVMs [11], k-NN [14], naive Bayes (NB) [15] and genetic algorithms (GA) [16]. In 2003, Tibshirani [17] successfully classified the lymphoma dataset with only 48 genes using a statistical method called nearest shrunken centroids, and used 43 genes for the SRBCT data. Lipo Wang et al. [11] in 2007 proposed an algorithm for finding a minimum number of genes, down to 3 genes, with the best classification accuracy, using C-SVM and FNN. Tzu-Tsung Wong [27] in 2008 proposed a two-stage classification method for the causality of a disease: a gene selection mechanism with individual or subset gene ranking as the first stage, and a classification tool, with or without dimensionality reduction, as the second stage.

This paper uses the two publicly available cancer datasets Lymphoma and Liver.

Yt). any value less than this will result in significant effects. How to effectively extend SVM for a multi-class classification is still an ongoing research issue [23]. if yj ≠ i. This paper efficiently uses the varieties of SVM such as one-againstall method (SVM-OVA) and SVM one-againstone (SVM-OAO) method with heavy trailed RBF and Gaussian kernel function. It constructs n (n-1)/2 classifiers in which each one is trained on the data for 2 classes.j =1…t. In this paper the α value is set at .. [23]. the paper compares the SVM one-against.05. Each binary SVM classifier creates a decision boundary. (X2. then the effect is said to be significant. which can be represented by K (Xi. The paper uses the pvalues to rank the important genes with small values and the sorted numbers of genes are used for further processing B. Gene Ranking -ANOVA p-values ANOVA is a technique. Xn ∈ Rn and n=1…t.Of all the information presented in the ANOVA table. SVM oneagainst-one classifiers with kernel functions gausssian and RBF (heavy tailed RBF) using two databases for Lymphoma cancer and Liver cancer. SVM one-against-all. e. Yn =1. which can separate the group it represents from the remaining groups For training data t = (X1. The approach chosen in this paper is the one-way ANOVA that performs an analysis on comparing two or more groups (classes) for each gene and returns a single p-value that is significant if one or more groups are different from others. The SVM-OAA constructs ‘n’ binary SVM classifier with the ith class separating from all other classes.To extend SVM for multi-class classification. SVMs are able to find the optimal hyper plane that minimizes the boundaries between patterns [19]. if the p value for the F. Gene Selection-SVM one-against-all and oneagainst-one SVMs are the most modern method applied to classify gene expression data. Furthermore. The ANOVA test is known to be robust and assumes that all sample populations are normally distributed with equal variance and all observations are mutually independent. SVMs are power tools used widely to classify gene expression data [21][22]. The SVM-OAO method was first used by [28]. Small p-values indicate a low probability of the between-sample variation being due to sampling of the within-sample distribution. K is the class labels corresponding to Xn. Their performances are compared with their respective training time and accuracy. SVMs were formulated for binary classification (2 classes) but cannot naturally extend to more than two classes. small p-values indicate interesting genes. if yj =i.g. (1) and ξji ≥ 0. The very small p-value indicates that differences between the column means are highly significant. RBF kernel can be able to give the same decision as that of RBF network. which works by separating space into two regions by a straight line or hyper plane in higher dimensions. The most significantly varying genes have the smallest p-values.ratio is less than the critical value ( ). SVMs were designed with SVM one-againstone.individual or subset gene ranking as the first stage and classification tool with or without dimensionality reduction as the second stage. Xj) ≡ e-γ | Xi – Xj |2 (2) Where Xi is the support vector of the ith class and Xj is the support vector for the new higher dimensional space and γ is the tuning parameter. Where each data xi is mapped to the feature space by function φ and c the penalty parameter. [18]. II METHODOLOGY A. 
This paper proposes an efficient methodology using statistical model for individual gene ranking and data mining models for finding minimum number of gene rather than thousands of genes. (wi)Tφ(xj) +bi ≤ -1+ ξji. The nth SVM solves Min ½ (wi)Twi + C Σ tj=1 ξji i i w biξ (wi)Tφ(xj) +bi ≥ 1 . The classification problem for the training data with ith and jth class is shown as 138 . which can be used to give good classification accuracy. Y1). The probability of the F-value arising from two identical distributions gives us a measure of the significance of the between-sample variation as compared to the within-sample variation. which is frequently used in the analysis of microarray data. The Radial basis function (RBF) is the most popular choice of kernel functions used in Support Vector Machines.ξji .all. This paper gives effective methodology to classify a multi-class problem . while any value greater than this value will result in non-significant effects. to assess the significance of treatment effects. Y2)…(Xt. and to select interesting genes based on P-values.

For the training data of the $i$th and $j$th classes, each pairwise SVM-OAO classifier solves

$\min_{w^{ij}, b^{ij}, \xi^{ij}}\ \tfrac{1}{2}(w^{ij})^T w^{ij} + C\sum_{t}\xi_t^{ij}$
subject to $(w^{ij})^T\phi(x_t) + b^{ij} \ge 1 - \xi_t^{ij}$ if $y_t = i$; $(w^{ij})^T\phi(x_t) + b^{ij} \le -1 + \xi_t^{ij}$ if $y_t = j$; $\xi_t^{ij} \ge 0$.  (3)

III. RESULTS AND DISCUSSION

The proposed methodology was applied to the publicly available Liver and Lymphoma cancer databases. This section reports the experimental results for the datasets with the SVM varieties one-against-all and one-against-one under the RBF and Gaussian kernel functions.

A. Lymphoma dataset

The methodology was applied to the lymphoma dataset with 4026 genes and 62 samples in 3 classes, namely DLBCL, CLL and FL, the subtypes of lymphoma cancer. The dataset had a few missing data; the k-nearest-neighbor algorithm with k = 3, as used by [18], was applied to fill the missing values. Half of the samples were picked randomly for training and all of the samples were used for testing. For the training dataset, of dimension 4026 x 31, the ANOVA p-value was calculated for each gene and the top-ranked genes were selected; it should be noted that after ranking the genes were renumbered in ascending order. All possible pairs of the top n genes, n(n-1)/2 combinations, were generated, and for these combinations the classifiers were trained using SVM one-against-all and one-against-one.

The performance of the classifiers was validated using 5-fold cross-validation (CV): the samples in the training dataset were randomly divided into 5 equal parts, 4 parts being used for training and the other for testing, so that over the 5 runs of the classifier a different test set was used each time. Classification was performed for 5 runs; the average 5-fold accuracy for each run and the average training error rate were calculated. The method then took the gene subsets that achieved 100% CV accuracy on the training samples, retrained the classifier, and used it to predict the samples in the testing dataset.

Figure 1 plots the gene pair (4, 6), which shows a good separation of the 3 classes DLBCL, CLL and FL with clear boundaries, on the basis of which a doctor would be able to distinguish the 3 subtypes of lymphoma.

Fig. 1. Gene expression levels of the gene pair (4, 6), showing good separation of the different classes for the lymphoma dataset.

Table 1 reports the average 5-fold cross-validation accuracy, and Table 2 the comparative testing accuracy, of all four algorithms (OAA-RBF, OAA-GAU, OAO-RBF and OAO-GAU) on the lymphoma data for the top 10, 20, 30, 50 and 100 ranked genes. The best prediction accuracy for the gene pair (4, 6), 98.39%, was achieved using the SVM one-against-all method with the RBF kernel function (OAA-RBF) from the top 20 genes, and in all cases it was well proved that SVM one-against-all was superior to all the other methods.

Table 1. Average 5-fold cross-validation accuracy of all algorithms, lymphoma data (methods OAA-RBF, OAA-GAU, OAO-RBF, OAO-GAU; top 10 to 100 genes; OAA-RBF consistently highest).
Table 2. Comparative testing accuracy of all algorithms, lymphoma data (methods OAA-RBF, OAA-GAU, OAO-RBF, OAO-GAU; top 10 to 100 genes; OAA-RBF highest, up to 98.39%).

In comparison with previous works, the proposed method performs well in terms of the number of genes needed to achieve the best accuracy; comparisons of the results with previous work are shown in Table 3 below.
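The paper does not state which SVM implementation was used; the sketch below reproduces the OAA/OAO comparison with scikit-learn (an assumption), on toy data with the lymphoma class sizes; the gamma and C values are illustrative.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.multiclass import OneVsRestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(62, 2))                   # e.g. expression of one ranked gene pair
y = np.array([0] * 42 + [1] * 11 + [2] * 9)    # DLBCL / CLL / FL class sizes

ova = OneVsRestClassifier(SVC(kernel="rbf", gamma=0.5, C=10.0))  # one-against-all
ovo = SVC(kernel="rbf", gamma=0.5, C=10.0)     # SVC is one-against-one internally

for name, clf in [("OAA-RBF", ova), ("OAO-RBF", ovo)]:
    acc = cross_val_score(clf, X, y, cv=5).mean()   # 5-fold CV as in the paper
    print(name, round(acc, 4))                      # near chance on this toy data
```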

B. Liver dataset

The liver dataset (http://genome-www.stanford.edu/hcc) has 2 classes, HCC and non-tumor liver, with 1648 genes for 156 observations, of which 82 are from HCC and 74 from non-tumor livers. Unlike the lymphoma dataset, which concerns the prediction of subtypes of lymphoma cancer, the liver samples are from cancerous and non-cancerous tissue, and the problem is to predict whether a sample comes from cancerous or non-cancerous tissue. As with the lymphoma dataset, half of the observations were randomly selected for training and all samples were used for testing, and the same methodology was applied.

From the plot in Fig. 2, of the gene pair (13, 23) that achieved the best test accuracy for the liver data, a doctor would be able to diagnose that a patient has HCC if and only if the expression level is less than 0.3; otherwise the tissue is of a patient without tumor.

Fig. 2. Plot showing the best separation for the liver dataset, achieved by the gene pair (13, 23).

Table 4 lists the comparative testing accuracy of all algorithms for the liver data over the top 10, 20, 30, 50 and 100 ranked genes. From these results it was again well proved that SVM one-against-all with the RBF kernel achieves better prediction accuracy than all the other varieties for the liver dataset as well.

Table 4. Comparative testing accuracy of all algorithms for the liver data (methods OAA-RBF, OAA-GAU, OAO-RBF, OAO-GAU; top 10 to 100 genes; OAA-RBF highest).

Figure 3 shows the number of gene pairs, among the top 100 genes, that achieved 100% CV accuracy for the lymphoma and liver datasets. As observed in the figure, SVM one-against-all with the RBF kernel function has the largest number of gene pairs with higher, or nearly 100%, training accuracy for both datasets.

Fig. 3. Maximum number of gene pairs that achieved 100% CV accuracy for the lymphoma and liver datasets (top 100 genes; bars for OAA-RBF, OAA-GAU, OAO-RBF and OAO-GAU on a 0-400 scale).

Table 3. Result comparison, lymphoma data
  Method                            Accuracy   No. of genes
  Proposed method                   98.39%     2
  Extreme learning machine [24]     97.33%     10
  Bayes with local SVM [26]         93%        30
  SVM-KNN [25]                      96%        50

IV. CONCLUSION

In this paper an efficient classification method for the cancer diagnosis problem using microarray data was presented. The paper used a classical statistical technique for gene ranking. On applying SVM one-against-all and one-against-one to the lymphoma and liver datasets, it was found that the SVM one-against-all classifier with the RBF kernel function (SVM OAA-RBF) achieves better classification results, and this held for both datasets, though its computational time for training was longer. The results were also promising when compared with previous works. Future work shall extend the approach with different classifiers and gene ranking methods.

REFERENCES
[1] P. Broberg. Statistical methods for ranking differentially expressed genes. AstraZeneca Research and Development Lund, S-221 87 Lund, Sweden, 7 May 2003.
[2] H. Chai, C. Domeniconi. An Evaluation of Gene Selection Methods for Multiclass Microarray Data Classification. Proceedings of the Second European Workshop on Data Mining and Text Mining in Bioinformatics.
[3] Pacific Symposium on Biocomputing, 8: 53-64, 2003.
[4] Y. Li, C. Campbell, M. Tipping. Bayesian automatic relevance determination algorithms for classifying gene expression data. Bioinformatics, 2002, 18: 1332-1339.
[5] R. Diaz-Uriarte. Supervised methods with genomic data: a review and cautionary view. In: Data Analysis and Visualization in Genomics and Proteomics, 2005, 193-214.
[6] J. Hua, J. Lowey, Z. Xiong, E. R. Dougherty. Optimal number of features as a function of sample size for various classification rules. Bioinformatics, 2005, 21: 1509-1515.
[7] T. Jirapech-Umpai, S. Aitken. Feature selection and classification for microarray data analysis: Evolutionary methods for identifying predictive genes. BMC Bioinformatics, 2005, 6: 148.
[8] A. Ben-Dor, N. Friedman, Z. Yakhini. Tissue classification with gene expression profiles. Proceedings of the Fourth Annual International Conference on Computational Molecular Biology, 2000, 54-64.
[9] R. Blanco, P. Larranaga, I. Inza, B. Sierra. Gene selection for cancer classification using wrapper approaches. International Journal of Pattern Recognition and Artificial Intelligence, 2004, 18(8): 1373-1390.
[10] J.-G. Zhang, H.-W. Deng. Gene selection for classification of microarray data based on the Bayes error. BMC Bioinformatics, 2007, 8: 370. doi:10.1186/1471-2105-8-370.
[11] L. Wang, F. Chu, W. Xie. Accurate Cancer Classification Using Expressions of Very Few Genes. IEEE/ACM Transactions on Computational Biology and Bioinformatics, vol. 4, no. 1, January-March 2007.
[12] V. Satuluri. A survey of parallel algorithms for classification. 2007.
[13] Y. Saeys, I. Inza, P. Larranaga. A review of feature selection techniques in bioinformatics. Bioinformatics, Advance Access published August 24, 2007.
[14] Y. L. Chin, S. Deris. A study on gene selection and classification algorithms for classification of microarray gene expression data. Jurnal Teknologi (D), 43(D): 111-124. ISSN 0127-9696.
[15] A. D. Keller, M. Schummer, L. Hood, W. L. Ruzzo. Bayesian Classification of DNA Array Expression Data. Technical Report UW-CSE-2000-08-01, August 2000.
[16] J. Liu, H. Iba. Selecting Informative Genes with Parallel Genetic Algorithms in Tissue Classification. Genome Informatics, 12: 14-23, 2001.
[17] R. Tibshirani, T. Hastie, B. Narasimhan, G. Chu. Class Prediction by Nearest Shrunken Centroids, with Applications to DNA Microarrays. Statistical Science, 18: 104-117, 2003.
[18] O. Troyanskaya, et al. Missing value estimation methods for DNA microarrays. Bioinformatics, vol. 17, 520-525, 2001.
[19] M. Song, S. Rajasekaran. A greedy correlation-incorporated SVM-based algorithm for gene selection. 21st International Conference on Advanced Information Networking and Applications Workshops, 2007.
[20] B. Scholkopf, C. J. C. Burges, A. J. Smola (eds.). Advances in Kernel Methods: Support Vector Learning. MIT Press, Cambridge, MA, 1999.
[21] E. Marchiori, M. Sebag. Bayesian learning with support vector machines for cancer classification with gene expression data. EvoWorkshops 2005: 74-83.
[22] Y. Lee, C.-K. Lee. Classification of multiple cancer types by multicategory support vector machines using gene expression data. Bioinformatics, vol. 19, no. 9, 2003.
[23] C.-W. Hsu, C.-J. Lin. A comparison of methods for multiclass support vector machines. IEEE Transactions on Neural Networks, vol. 13, 2002.
[24] R. Zhang, G.-B. Huang, N. Sundararajan, P. Saratchandran. Multicategory classification using an Extreme Learning Machine for microarray gene expression cancer diagnosis. IEEE/ACM Transactions on Computational Biology and Bioinformatics, vol. 4, no. 3, July-September 2007.
[25] S.-B. Cho, H.-H. Won. Machine learning in DNA microarray analysis for cancer classification. First Asia-Pacific Bioinformatics Conference, 2003.
[26] E. Marchiori, M. Sebag. Bayesian learning with local support vector machines for cancer classification with gene expression data. Springer LNCS.
[27] T.-T. Wong, C.-H. Hsu. Two-stage classification methods for microarray data. Expert Systems with Applications, 34 (2008): 375-383.
[28] U. Kressel. Pairwise classification and support vector machines. In: Advances in Kernel Methods: Support Vector Learning, MIT Press, Cambridge, MA, 1999, 255-268.
”Multicategory classification using an Extreme learning machine for microarray gene expression cancer diagnosis”. Bayesian automatic relevance determination algorithms for classifying gene expression data. Safaai Deris. Sanguthevar Rajasekaran. ISSN 0127-9696 [15] Andrew D. Larranaga P. 6:148. Science Direct. Cambridge. Hitoshi Iba. First Asia pacific Bioinformatics conference. Statistical methods for ranking differentially expressed genes Molecular Sciences. Tipping M. IEEE/ACM Transactions on computational biology and bioinformatics.IEEE [21] Elena Marchiori. International Journal of Pattern Recognition and Artificial Intelligence 2004.Advances in Kernel Methods –Support VectorLearning. no.1999.C. 111-124.1186/1471-2105-8-370. Friedman N. vol.” Machine learning in DNA microarray analysis for cancer classification. Ruzzo. Schummer M. 8:370doi: 10. Selecting Informative Genes with Parallel Genetic algorithms in Tissue Classification.Tissue classification with gene expression profiles. Bioinformatics 2002. Hastie. LNCS. Vol. Feng Chu. Bioinformatics 2005. Genome Informatics 12: 14–23 (2001). Lowey J. Chu. Class Prediction by Nearest Shrunken Centroids with applications to DNA Microarrays. Classification of multiple cancer types by Multicategory support vector machines using gene expression data.

Study and Experiment of Blast Furnace Measurement and Control System Based on Virtual Instrument

Li Shufen, Beijing Union University, Beijing, China, zdhtshufen@buu.edu.cn
Liu Zhihua, Beijing University of Chemical Technology, Beijing, China, Liuzhihua714@sina.com

Abstract: This article introduces a blast furnace measurement and control system based on the virtual instrument development platform LabView. The hardware takes the ADAM4000 series data acquisition modules of Advantech as its core. The system can replace a mass of display instruments, monitor the furnace conditions in the production process, display, record and store the measured information, manage the historical data, and present a friendly visual simulation interface, so as to ensure the quality and safety of production.

Keywords: LabView, Data Acquisition Module, Virtual Instrument, Furnace Condition

I. INTRODUCTION

Virtual instrumentation is a new technology developed in the 1990s. It is the integration of computer, bus, microelectronic, and measurement and control technology, and it has been widely used in various fields, such as fault diagnosis, process control and equipment design [1]. LabView, developed by the American NI Corporation, is a virtual instrument development platform for many fields; it has a simple graphical programming environment and powerful hardware-driver features.

II. SYSTEM FUNCTIONS AND HARDWARE DESIGN

This system takes the blast furnace system of No. 1 Iron Making of Tanggang as the design model. In the production process of a blast furnace, it is necessary to measure all the signals, such as temperature and pressure, at every furnace point, so the amount of acquired signal data is large, and the site signals are complex and of many kinds. All this makes it difficult to measure and control the condition of the blast furnace.

Figure 1. Diagram of the basic system (sensors and annunciators feed the ADAM4018 and ADAM4050 modules, the ADAM4021 module drives the executor, and the ADAM4520 links the RS-485 bus to the host).

A. System functions

The system has the following functions: setting the system parameters (for example, baud-rate setting); real-time measurement and display; real-time control; sound and light alarm; and data storage, browsing and report printing. In addition, the data records comprise both computer records and manual records, and the operating mode includes both manual control and PLC control.

B. Hardware design

The measurement and control system is mainly composed of the sensors, measuring transmitters, data acquisition modules, executive drives and actuators, the host computer and a printer; the composition is shown in Figure 1. The ADAM4000 series signal acquisition modules developed by Advantech are characterized by the acquisition of many kinds of signals, stable operation and an RS-485 communication function; they can accept the standard signals from the sensors directly, so that some of the measuring transmitters can be omitted. A high-performance computer is used as the host. In this system the serial port COM2 is used, and the baud rate is set to 9600 b/s. The actuator is the drive circuit used to drive the regulating valve.

The application of the sensors should consider two factors: the site environment and the measurement requirement. Since the hot-air and furnace-bottom temperatures are high, we use S-type temperature sensors there; their range is 500 to 1750 degrees Celsius and they convert signals below +22 mV, so they are very suitable for these measurement objects. The remaining temperature sensors are K-type.
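As a hedged sketch (not from the paper) of polling the ADAM-4018 over the RS-485 bus from the COM2 port mentioned above, using pyserial: the module address "01" and the "#AA" read-all-channels ASCII command are assumptions based on the ADAM-4000 command set, and should be checked against the module's actual configuration.

```python
import serial  # pyserial; the ADAM-4520 converts RS-485 to the host serial port

def read_adam4018(port="COM2", address="01"):
    """Read all analog-input channels of one ADAM-4018 module (assumed protocol)."""
    with serial.Serial(port, baudrate=9600, timeout=1.0) as bus:
        bus.write(f"#{address}\r".encode("ascii"))   # "read all channels" command
        reply = bus.read_until(b"\r").decode("ascii")
    # a typical reply looks like ">+0765.2+0523.8...": one signed value per channel
    return reply

print(read_adam4018())
```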

and then notify the workers what the reason is. I have also met some problems in the experimental debugging process. if directly applying the system. The reasons are: a The module address is duplicated. data management and other functions. b Serial band sate is not consistent with the module band rate. 2) Module operates normally. while in china the traditional instruments are still in the separation from computer. location and other signals can be shown.and set the representative parameters(such as: there are four signals T1. even with special symbols. IV. we can output the control signals. Storage signals have also taken the ways of representative parameters. And it can also display the scattered data in a centralized way in the function of data analyze. d Enter wrong direction.the equipment control. 4. These measures can make it convenient to query. it will be very conducive for the . 3. 3. display the parameters of furnace points: the workers can know all points’ pressure. DATA PROCESSING Through displaying each point’s acquisition data. e The impact of other procedures. flashing alarm and manual operation: the alarm will be operated if any of the parameters of measurement points exceed the set limit. B. humancomputer interaction. and procedure will record the furnace conditions every 10 seconds. Level of Software Function According to the practical production. For example. including setting parameters. Software system will read the key parameters. while 4050 digital input and output module set the working methods as “40”. it has been occupying the serial. signal acquisition and display. if there is an error on the hardware module address. VI. data management: the operators need to register in the display interface before they began to work. after filtering and giving PID operations to the acquisition signals. the corresponding parameter will have an error too. analysis and investigate the production data. take T1 as representative parameter ) as the alarm limits. the level of software modules is based on the problems which may happen on the production process and the requirement for production management. if necessary. So it must have a quite accurate simulation system to complete the debugging system before running in the field. Meanwhile. and in the event of fault or when it alarms the current data can be stored and printed. flow. software function 1. V. b The signal source for the data acquisition module has hardware error. T2. T3. we must finish the hardware setting as the requirement and complete the hardware 143 Based on the virtual instrument development platform. temperature. the site temperature. Output of the module was automatically selected. Therefore.single-function. Through the compensation lead. T4 of the furnace top temperature. but the computer detects no module. the reasons are: a Data acquisition delayed too long b Interruption in data collection process c The data acquisition module which needs to scan responses disordered. A. there will inevitably cause great loss. flow rate and the furnace condition through the host computer screen. Before operating the system. In each of the data records. Everyday these data will form a daily document. the followings are some solutions: 1) 4250 operates normally and hardware connects correctly. the reasons are: a The ADAM driver is not closed. the measurement and control equipment can significantly improve the testing efficiency. 4) System running very slow.respectively. 
3) ADAM drivers can find the modules.4021 analog output module set output range 0~5V to control regulating valve. Virtual instrument has already got a very universal application in developed countries. 2. but the collection data shows abnormal. CONCLUSION It is a key process to operate and debug software for the system. III. c No power supply for the signal source. we can switch to manual operation to directly adjust output signal for emergency. pressure. increase testing functions and improve system performance and to improve the accuracy and precision control. and then to adjust the parameters such as temperature and pressure to achieve the objective of adjusting furnace condition. In the practical production. then control the air valve opening. temperature sensor directly connected to the analog input module 4018. compare with the settings and execute PID calculations then output the adjusting signals.temperature ranges from 0 to 1000 and it converts signals below +50mV. The reasons are: a Error on interception of character or operation of the software program. So the software mainly has the following functions: 1. it contains both the workers’ number and their working time. 2. set the system alarm limit : in order to ensure highquality products and safety production. b LabView procedure has errors. we can set the quality assurance and security assurance of the relevant parameters . we can use the key parameters to complete the control . which can save a lot of space to the data files. but LabView procedures can’t detect any modules[3]. MAJOR TECHNICAL PROBLEMS AND SOLUTIONS SOFTWARE DESIGN detection because this will affect the stability of the system. So it is very important to correctly set up the hardware and software[2]. and this will cause serious consequence to the production.

REFERENCES
[1] Zhao Yong. The status and development trend of virtual instrument software platform technology. Foreign Electronic Measurement Technology, 2002(1).
[2] Xu Yun. Virtual instrument data acquisition based on serial communication. Instrument Technique and Sensor, 2002(1).
[3] Yang Leping. LabView Programming and Application. Electronics Industry Press.

A New Optimization Scheme for Resource Allocation in OFDMA based WiMAX Systems

Arijit Ukil, Jaydip Sen, Debasish Bera
Wireless Innovation Lab, Tata Consultancy Services, BIPL, Salt Lake-91, Kolkata, India
arijit.ukil@tcs.com

Abstract: In this paper, an efficient scheme to optimize resource allocation for dynamic OFDMA based WiMAX systems is presented, which provides priority-estimated resource allocation with individual QoS provisioning. The proposed scheme dynamically assigns OFDMA resources to the users to optimize the overall system performance in WiMAX systems with heterogeneous traffic and diverse QoS requirements. It also exploits the time-diversity gain achieved in the long-term computation of the Proportional Fair (PF) metric. Simulation results show a considerable performance improvement of priority-based resource allocation with long-term PF calculation with respect to the traditional PF algorithm.

Keywords: WiMAX, QoS, OFDMA resource allocation, time-diversity gain, long-term fairness, Proportional Fair, scheduling.

I. INTRODUCTION

Next-generation broadband wireless applications require high data rates, minimum delay, low latency and real-time behavior, in short highly demanding Quality of Service (QoS), which cannot realistically be provided unless the limited system resources, bandwidth and transmitter power, are intelligently used and properly optimized. A good resource allocator or scheduler generally depends on the business model of the operator: the system is either to attain maximum aggregate throughput, or to provide fairness among the users, or to have a trade-off between them. A large gain in throughput can be achieved through multiuser raw-rate maximization by exploiting multiuser diversity (MUD), but fairness must simultaneously be guaranteed. Proportional Fair (PF) based optimization heuristically tries to balance fairness among the users in terms of outcome or throughput, while implicitly maximizing the system throughput in a greedy manner.

QoS guarantee is an important feature of the current and next-generation broadband wireless networks like WiMAX, LTE and IMT-A. The satisfaction of a well-served user increases only marginally when its QoS level is increased even further, but if the QoS level declines below some threshold, satisfaction drops significantly, which is not what the service provider would like. Another degree of freedom in QoS guarantee is the priority of the user. Premium users are the users with higher priority, who enjoy the privilege of getting the best and uninterrupted service even under bad channel conditions, like the UGS class in WiMAX. By assigning considerable resources to premium users, the remaining users are affected the most, with degraded service quality, and the limited system resources cannot be properly utilized. An integrated scheme that simultaneously satisfies the individual QoS (the user requirement) and maximizes system performance (the service-provider requirement) is therefore very much required, and QoS provision should play a major role in deciding resource allocation, coupled together with it.

It is quite computationally expensive to satisfy each user's instantaneous data-rate requirement in an optimum way, and the approach of resource allocation based on instantaneous QoS guarantees does not consider the temporal diversity of the mobile wireless channel. Considering the channel dynamics and the fine granularity of OFDMA systems in both the frequency and temporal domains, it is quite obvious that a mean QoS guarantee over the long term provides better performance, specifically for delay-tolerant traffic. From this perspective we propose an efficient resource allocation scheme with integrated QoS provisioning, which evaluates and assigns a priority index to each user and performs a biased optimization based on the assigned priority index. The proposed priority indexed long-term proportional fair (PILTPF) algorithm attempts to allocate the OFDMA subcarriers so that each user achieves its minimum mean data-rate requirement within a few frame durations, with a degree of bias toward the higher-priority users in heterogeneous traffic conditions. PILTPF emphasizes each user's true priority in allocating system resources, in order to maintain as well as optimize the requirements of the different QoS classes. The algorithm is simplified by equally distributing the total available transmit power, as performance can hardly be deteriorated by equal power allocation to each subcarrier [6]; this makes PILTPF very suitable and practical for wireless broadband systems like WiMAX, LTE and IMT-A, thanks to its simplicity and mean QoS guarantee feature.

Optimization approaches basically attempt to dynamically match the requirements of the data-link connections to the available physical-layer resources so as to maximize some system metric. It was shown in [2] that in order to maximize total capacity, each subcarrier should be allocated to the user with the best gain on it, and the power should be allocated using the water-filling algorithm across the subcarriers. The allocation principle taken in [3] is to allow the user with the least proportional capacity to use each particular subcarrier. The PF optimization problem as instantaneous maximization of the sum of logarithmic user data rates in multi-carrier systems is considered in [4]. In order to be practically feasible, previous research [5] proposes low-complexity suboptimal algorithms which select a user-carrier mapping where the logarithmic sum of the user rates is maximized.

The paper is organized as follows. In Section II the system model and problem formulation are presented. In Section III the unbiased instantaneous proportional fair optimization problem is discussed. The PILTPF algorithm and the concept of priority indexing of the users are presented in Section IV. Simulation results, roughly based on the Mobile WiMAX Scalable OFDMA-PHY, and an analysis of the results are presented in the next section. Section VI provides the summary, conclusion and future scope of work.

II. SYSTEM MODEL AND PROBLEM FORMULATION

OFDMA, which is basically multi-user OFDM, is the standard multiple-access scheme for next-generation wireless standards like WiMAX, LTE and IMT-A. It is characterized by a fixed number of orthogonal subcarriers to be allocated to the available users [1]. OFDMA resource allocation algorithms dynamically assign mutually disjoint subcarriers to the users, taking advantage of MUD to meet a specified system performance objective with the help of the knowledge of the channel condition available from channel state information (CSI).

A single-cell downlink wireless cellular network with one base station (BS) serving K users in total is considered. The total available bandwidth B is partitioned into N equal narrowband OFDMA subcarriers, denoted s1, s2, ..., sN, where each subcarrier width is $s_n = B/N$ Hz; in order to overcome frequency-selective fading, $B/N$ is chosen to be sufficiently smaller than the coherence bandwidth of the channel. The subcarrier allocation epoch is assumed to be less than the channel coherence time, and perfect channel knowledge in the form of CSI is assumed. Each OFDMA subcarrier n, belonging to user k, is subject to flat fading with channel gains $H = \{h_{kn}\}$, and the signals suffer from AWGN noise (Gaussian distributed with zero mean, power spectral density $N_0$ and total noise power $N_T$), path loss and shadowing. The interference from adjacent cells is treated as background noise. The total available transmit power is P; $P_{kn}$ is the transmit power for subcarrier n when assigned to user k, equal to $P/N$ under equal power distribution.

Let $R_{knt} = f(h_{knt})$ be the instantaneous achievable rate for user k when subcarrier n is allocated at time t, and $R_{kt}$ the data rate achieved by user k at allocation epoch t, expressed as

$R_{kt} = \frac{B}{N}\sum_{n=1}^{N}\rho_{knt}\,f(h_{knt}) \qquad (1)$

$f(h_{knt}) = \log_2\!\left(1+\frac{|h_{knt}|^2 P_{kn}}{N_T\,(B/N)}\right) \qquad (2)$

$\sum_{n=1}^{N}\sum_{k=1}^{K}\rho_{knt} = N \qquad (3)$

where $\rho_{knt}$ is the subcarrier assignment matrix at allocation epoch t: $\rho_{knt} = 1$ if subcarrier n is assigned to user k at time t, and 0 otherwise.

In practice, the achieved data rate is less than what equation (2) suggests, as there exists an SNR gap of a few dB. The SNR gap, in simplified terms [5], can be approximated as $\Delta_{gap} = -\ln(5\,\mathrm{BER})/1.6$, so that $R_{kt}$ becomes

$R_{kt} = \sum_{n=1}^{N}\rho_{knt}\,\frac{B}{N}\log_2\!\left(1+\frac{|h_{knt}|^2 P_{kn}}{\Delta_{gap}\,N_T\,(B/N)}\right) \qquad (4)$

The QoS of the kth user is described by the minimum individual rate requirement $\gamma = [\gamma_1, \gamma_2, \ldots, \gamma_K]$ in bits/sec. Let the available QoS classes be $Q_m$, $1 \le m \le M$, each of which requires a certain QoS guarantee. For example, in WiMAX there exist five (M = 5) QoS classes: UGS, ertPS, nrtPS, rtPS and BE. UGS and ertPS class users are given the highest priority and should be satisfied in their QoS requirements at all times. The objective of the PILTPF algorithm is to simultaneously provide long-term fairness, QoS and system optimization, in order to satisfy the requirements of both the user and the service provider. The multiuser OFDMA system architecture, with the subcarrier allocation and dynamic priority index estimator modules, is shown in Fig. 1. $PI_k$ denotes the priority index of the kth user; it is a function of the maximum buffer size, the minimum data-rate requirement,

the maximum latency and the QoS class of the kth user:

$PI_k = f(Q_k, \gamma_k, bs_k, \delta_k) \qquad (5)$

where $bs_k$ is the maximum buffer size and $\delta_k$ the maximum tolerable delay of the kth user.
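A short numerical sketch of Eqs. (4) and (5) (not from the paper): the rate of one subcarrier under the SNR gap, and one possible priority-index function. The paper specifies only the arguments of f(.), not its form, so the weighting below is an invented illustration.

```python
import numpy as np

def achievable_rate(h, P_kn, B, N, N_T, ber=1e-3):
    """Eq. (4) for a single subcarrier, in bits/s."""
    gap = -np.log(5.0 * ber) / 1.6                 # SNR gap approximation [5]
    snr = (abs(h) ** 2) * P_kn / (N_T * B / N)
    return (B / N) * np.log2(1.0 + snr / gap)

QOS_CLASS_WEIGHT = {"UGS": 5, "ertPS": 4, "nrtPS": 3, "rtPS": 2, "BE": 1}

def priority_index(qos_class, gamma_k, buf_k, delay_k):
    # Illustrative f(.): higher rate demand, a bigger queue and a tighter
    # delay bound all raise the priority of the user.
    return (QOS_CLASS_WEIGHT[qos_class]
            * (1.0 + gamma_k / 1e6) * (1.0 + buf_k / 1e4) / max(delay_k, 1e-3))

print(achievable_rate(h=0.9, P_kn=0.1, B=10e6, N=1024, N_T=1e-8))
print(priority_index("UGS", gamma_k=512e3, buf_k=2048, delay_k=0.02))
```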
the the subcarriers are denoted as s1. Priority Index Estimator calculates the unique priority of each of the user based on maximum buffer size. each of which requires certain QoS guarantee. nrtps. In practice the achieved data rate is less than that of what equation (2) suggests as there exists a few dB SNR gap. In section II system model and problem formulation are presented. Priority Index Estimator and Scheduler. OFDMA is basically Multi-user-OFDM. is subject to flat fading. Section VI provides the summary. The total available bandwidth B is partitioned into N equal narrowband OFDMA subcarriers..sN. γ2. 1. maximum tolerable delay and QoS class of the user and assigns each user with a unique priority index. For an example. OFDMA is characterized by a fixed number of orthogonal subcarriers to be allocated to the available users [1]. where sn =   2  hknt × Pkn   f( hknt ) = log 2 1 + B   NT ×   N     2  N hknt × Pkn   Rkt = ∑ ρ kn x log 2 1 + II. UGS. ertps. The interference from adjacent cells is treated as background noise. Perfect channel characteristic is assumed in the form of channel state information (CSI). In section III Unbiased Instantaneous Proportional Fair Optimization problem is discussed.. Subcarrier allocation Module. The objective of the PILTPF algorithm is to simultaneously provide long term fairness. LTE. rtps and BE.
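To make equations (1)-(4) concrete, the following is a minimal Python sketch of the per-user, per-subcarrier achievable rates. All names are ours; the power is split equally across subcarriers as assumed above, and the SNR gap is taken to divide the SNR, consistent with the statement that the gap reduces the achieved rate.

```python
import numpy as np

def achievable_rates(H, P_total, B, N_T, ber=1e-3):
    """Achievable rates (bits/s) per user and subcarrier, per Eqs. (1)-(4).
    H: (K, N) matrix of channel gains h_kn at the current epoch."""
    K, N = H.shape
    p_n = P_total / N                  # equal power per subcarrier, P/N
    noise = N_T * (B / N)              # noise power per subcarrier
    gap = -np.log(5 * ber) / 1.6       # SNR gap, Delta_gap
    snr = (np.abs(H) ** 2) * p_n / noise
    return (B / N) * np.log2(1.0 + snr / gap)
```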

The priority index introduced above can be written as

  PI_k = f(Q_k, γ_k, bs_k, δ_k)   (5)

where bs_k is the maximum buffer size and δ_k the maximum tolerable delay of user k.

III. UNBIASED INSTANTANEOUS PROPORTIONAL FAIR OPTIMIZATION

Proportional Fairness (PF) in an OFDMA system maximizes the sum of the logarithmic mean user rates [5, 7]:

  PF_t = max Σ_{n=1..N} Σ_{k=1..K} ln R̄_kt   (6)

Equation (6) can be generalized when PF is defined as follows [6]:

  PF_t = Π_{k∈K} ( 1 + Σ_{n∈N} R_knt / ((Δτ − 1) R̄_kt) )   (7)

where PF_t is the proportional fairness index at the t-th instant. Equation (7) gives the optimal PF subcarrier allocation, but it cannot be implemented due to its high computational complexity. The suboptimal form [7] allocates subcarrier n to the user k* for which

  k* = arg max_k ( R_knt / R̄_kt )   (8)

  R̄_kt = (1 − 1/Δτ) R̄_k,t−1 + (1/Δτ) R_k,t−1   (9)

where Δτ is the averaging window size, i.e. the period between successive allocations, and R̄_kt is the mean data rate achieved by user k up to the preceding allocation instant; the moving average implements a notion of a low-pass filter [7] for the purpose of providing fairness. The constraint of the system in order to provide QoS is

  R̄_kt ≥ γ_k  ∀k   (10)

In this architecture the subcarrier allocation module does the frequency-domain mapping of the users to the subcarriers, while the scheduler does the time-domain mapping of the users to the time slots and decides, in each frame, how to schedule the packets for transmission; the Subcarrier Allocation module (Fig. 1) allocates the OFDMA subcarriers to the users with the help of the PILTPF algorithm.

Fig. 1. Multiuser OFDMA system architecture with the Subcarrier Allocation and Dynamic Priority Index Estimator modules: the users' QoS profiles and buffers, the CSI {h_kn} and the priority indexes PI_k feed the subcarrier allocator and the scheduler at the transmitter.

PF computes the achievable data rate of each user instantaneously, (8)-(9), and maintains fairness by the law of diminishing returns. Traditional PF can only support a loose delay-bound QoS requirement, which is not suitable for real-time multimedia services, where the delay-bound QoS requirement is stringent; nor is PF the most optimized solution for the delay-tolerant applications inherent to the nrtPS and BE classes of traffic, such as BE traffic in IEEE 802.16. The simplest way of incorporating user priority would be to give the highest-priority user the chance to be allocated the best of the subcarriers; but that is strict-priority subcarrier allocation, and it may leave a higher-priority user with a bad channel condition unable to maintain its QoS. The optimization and sub-optimization schemes (6)-(10) discussed so far deal with maintaining the QoS guarantee instantaneously and make no provision for privileged service to the higher-priority users; the advantages offered by considering parameters like maximum buffer size and maximum latency should be taken into account to provide the premium users' QoS guarantee.

IV. PRIORITY BASED PROPORTIONAL FAIR OPTIMIZATION AND ALGORITHM

Unbiased instantaneous Proportional Fair optimization as described in (6)-(10) is based on the instantaneous computation of the proportional fair metric (PF), and the resulting optimization considers neither the time diversity nor the priority of the users: traditional PF treats every user equally, in an unbiased way. The wireless channel is normally highly dynamic in nature, and long-term optimization for system performance improvement yields a time-diversity gain; handling diverse QoS-based traffic therefore demands a modification of the traditional PF approach, namely a QoS class-biased proportional fair optimization, which is introduced below. The PILTPF algorithm exploits the time diversity as well as the priority of each user by computing a priority-index-based PF metric in the long term to meet the constraint of minimum rate requirement, which amounts to a monotonically increasing assurance of QoS. PILTPF has two parts and operates sequentially: the first part estimates the user's priority based on (5); the second is subcarrier allocation. The Priority Index Estimator considers the system constraints and, based on the user's QoS class, maps the QoS metrics of delay, buffer size and minimum data rate requirement to a priority value, unique to each user, which relaxes strict priority, reflects the true priority, and converts the traditional PF optimization into a weighted PF optimization. The subcarrier allocation part then assigns the appropriate subcarriers to the users so as to optimize the system performance.
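As a reference point for the next section, here is a minimal sketch of one epoch of the unbiased suboptimal PF rule (8)-(9). All names are ours; users that win no subcarrier simply contribute zero achieved rate to the average update.

```python
import numpy as np

def pf_allocate(R_inst, R_mean, dtau=100):
    """One allocation epoch of unbiased PF.
    R_inst: (K, N) instantaneous rates R_knt; R_mean: (K,) mean rates."""
    K, N = R_inst.shape
    achieved = np.zeros(K)
    for n in range(N):
        k_star = np.argmax(R_inst[:, n] / R_mean)      # Eq. (8)
        achieved[k_star] += R_inst[k_star, n]
    R_mean = (1 - 1.0 / dtau) * R_mean + achieved / dtau   # Eq. (9)
    return achieved, R_mean
```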

Priority estimation evaluates the dynamic priority of each user from its current QoS class, used buffer size, delay limit and minimum data rate requirement; all the QoS-related PHY and MAC parameters enter the priority index estimation function (11). PI is highest for the UGS class; for the rest, PI is estimated as

  PI_k = Q_k γ_k / ( (bs_k^MAX − bs_k^t)(δ_k^MAX − δ^t) )   (11)

where bs_k^MAX, bs_k^t, δ_k^MAX and δ^t are the maximum buffer size, the used buffer size, the maximum delay limit and the elapsed delay of the k-th user at the current allocation instant t. Equation (11) is based on the notion that a user's priority increases as the urgency of allocating resources to it grows, so (11) can also be termed urgency-based priority assignment: when the remaining buffer and delay margins are large, the priority depends mostly on the value of Q_k only, whereas PI_k → ∞ as bs_k^MAX − bs_k^t → 0 or δ_k^MAX − δ^t → 0. The higher the priority of a user, the higher the magnitude of PI_k and the better its chance of being allocated subcarriers even under a bad channel condition and a high mean achieved rate R̄_kt. The subcarrier allocation module selects the user k* for the n-th subcarrier as

  k* = arg max_k ( PI_k R_knt / R̄_kt )   (12)

which is a weighted PF rule reflecting the true, highly dynamic priority of the user.

PILTPF does not depend on a per-instant data rate guarantee, nor is the PF metric computed instantaneously; instead, both the PF metric and R̄_kt are computed as moving averages over a window T_k, and the subcarriers are allocated so as to maintain each user's minimum average data rate requirement within the time duration T_k:

  E(R_kt), t = 1…T_k  ≥  γ_k   (13)

  lim_{δ_k^MAX → ∞} P(ω_k → γ_k) = 1   (14)

  T_k = δ_k^MAX   (15)

Equation (14) is the condition of absolute convergence to the QoS requirement with probability one, which attempts to avoid QoS violation to the maximum extent as well as to minimize the outage probability. This is due to the fact that in a mobile wireless environment, over a long duration, the time-diversity gain becomes high and the mean channel condition follows a similar distribution according to Bernoulli's Law of Large Numbers; this justifies the intuitive interpretation that the expected value of a random variable is basically its long-term average when sampled repeatedly, so the achievable rate can well be considered an independent and identically distributed (i.i.d.) random variable with mean μ_k. Then, theoretically,

  lim_{δ_k^MAX → ∞} TD_gain = μ_k

PILTPF utilizes this performance gain in its attempt to converge to γ_k by relaxing the QoS constraint of minimum data rate to its statistical mean value. It can also be noted from (13)-(15) that more time-diversity gain is achieved with increasing δ_k^MAX: the larger T_k, the more ergodic the optimization scheme becomes, and with higher granularity of the allocation interval T_f better optimization can be obtained. Maximum system optimization is obtained when each user's allocation completes only at t = δ_k^MAX. Equations (11)-(15) describe the PILTPF optimization scheme, which exploits the time-diversity gain of each and every user whenever possible.

Based on the optimization scheme (11)-(15), the proposed PILTPF algorithm is as follows:

Step 1: Set the initial mean achievable data rate E(R_kt)|_{t=1} = ε, where ε is a small number, and set t = 0.
Step 2: Find PI_k, ∀k, as per (11).
Step 3: Find the user k* as per (12) for all the subcarriers.
Step 4: Find the data rate achieved by the k-th user at the t-th instant, R_kt = Σ_{n=1..N} ρ_knt R_knt, for all users, and compute the mean achieved rate E(R_kt) over t = 1…T_k.
Step 5: If E(R_kt) ≥ γ_k, find the next best k, so that all the subcarriers can be assigned to the users; else continue.
Step 6: t = t + T_f and go to Step 2.

V. SIMULATION RESULTS AND ANALYSIS

To investigate the performance of the PILTPF algorithm, simulation results under the system parameters and simulation scenario given in Table I are presented in this section. The system parameters are roughly based on the Mobile WiMAX Scalable OFDMA-PHY, and a frequency reuse factor of 1 is taken.
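The core of Steps 2-4 can be sketched as below. All names are ours; the handling of already-satisfied users in Step 5 is omitted for brevity, and the margins in (11) are assumed positive.

```python
import numpy as np

def piltpf_allocate(R_inst, R_mean, qos, gamma, bs_max, bs, d_max, d):
    """One epoch of PILTPF: priority indexes of Eq. (11), then the
    weighted PF assignment of Eq. (12). Per-user arrays have shape (K,);
    R_inst is (K, N)."""
    pi = (qos * gamma) / ((bs_max - bs) * (d_max - d))   # Eq. (11)
    achieved = np.zeros_like(R_mean)
    for n in range(R_inst.shape[1]):
        k_star = np.argmax(pi * R_inst[:, n] / R_mean)   # Eq. (12)
        achieved[k_star] += R_inst[k_star, n]
    return achieved
```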

The subcarrier allocation instant is taken to be equal to the frame duration (5 ms), which is assumed to be less than the coherence time of the channel. A random heterogeneous mix of UGS, ertPS, nrtPS and BE traffic with varying QoS metrics like delay, buffer size and minimum data rate is taken for the simulations. Simulation results are shown in Figs. 2, 3 and 4.

TABLE I. SIMULATION PARAMETERS
  Available bandwidth: 1.25 MHz
  Number of users: 20/30
  Number of sub-carriers: 72
  BER: 10^-3
  SNR gap: −ln(5·BER)/1.6
  Channel model: Rayleigh
  Modulation: 16QAM
  Frequency reuse factor: 1
  Channel sampling frequency: 1.5 MHz
  Maximum Doppler: 100 Hz

Fig. 2 compares the achievable data rates of the users under the PILTPF and PF algorithms; it also shows the minimum data rate requirement of each user as the QoS profile. Fig. 2 clearly shows that PILTPF follows the QoS profile for all the users, whereas under the PF algorithm some of the users are deprived and achieve a far lower data rate than their minimum requirement, due to the inherent feature of its instantaneous PF metric computation. If the deprived user, say the 10th user in Fig. 2, is a high-priority customer, then the allocation is very much unacceptable from the service provider's as well as the user's perspective. In Fig. 3 a bar chart comparing the performance of PILTPF and PF is depicted. The unevenness of the subcarrier assignment by the conventional PF algorithm becomes more visible when more users exist; in that highly complex scenario of a large number of users, Fig. 4 shows that PILTPF at least attempts to follow the QoS profile in order to preserve the importance of the priority of the users. From Figs. 3 and 4 it can be interpreted that, because PILTPF considers both the priority of the user and the time-diversity gain, it results in better performance both in terms of throughput and of QoS guarantee. However, with a small number of users and good channel conditions throughout the entire cell, the difference between PF and PILTPF diminishes.

Fig. 2. User achievable throughput by the PILTPF and PF algorithms when the number of users = 20.
Fig. 3. Chart comparison between PILTPF and PF.
Fig. 4. User achievable throughput by the PILTPF and PF algorithms when the number of users = 30.

VI. SUMMARY AND CONCLUSION

We have proposed an efficient but simple OFDMA resource allocation algorithm, which has shown better performance and QoS guarantee in the long term than the conventional suboptimal PF algorithm under QoS-diversified heterogeneous traffic conditions. PILTPF is of very practical importance and can well be implemented in next generation broadband wireless systems like LTE, WiMAX and IMT-A to improve the overall system performance.

REFERENCES
[1] Ahmad R. S. Bahai, et al., "Multi-Carrier Digital Communications: Theory and Applications of OFDM", 2nd ed., Springer, 2004.
[2] Rhee et al., "Increase in capacity of multiuser OFDM system using dynamic subchannel allocation", IEEE VTC, pp. 1085-1089, May 2000.
[3] Z. Shen, et al., "Adaptive resource allocation in multiuser OFDM systems with proportional rate constraints", IEEE Trans. Wireless Communication, vol. 4, no. 6, pp. 2726-2737, Nov. 2005.
[4] Hoon Kim and Youngnam Han, "A Proportional Fair Scheduling for Multicarrier Transmission Systems", IEEE Communication Letters, vol. 9, no. 3, pp. 210-212, March 2005.
[5] Abolfazl Falahati and Majid R. Ardestani, "An Improved Low-Complexity Resource Allocation Algorithm for OFDMA Systems with Proportional Data Rate Constraint", ICACT, pp. 606-611, 2007.
[6] Wei Xu, et al., "Efficient Adaptive Resource Allocation for Multiuser OFDM Systems with Minimum Rate Constraints", IEEE ICC, pp. 5126-5131, 2007.
[7] Christian Wengerter, Jan Ohlhorst, and Alexander Golitschek Edler von Elbwart, "Fairness and Throughput Analysis for Generalized Proportional Fair Frequency Scheduling in OFDMA", IEEE VTC, pp. 1903-1907, 2005.

An Integration of CoTraining and Affinity Propagation for PU Text Classification

Na Luo1, Fuyu Yuan3, Wanli Zuo1
1 College of Computer Science and Technology, Jilin University, Changchun 130012, China
2 Department of Computer, Northeast Normal University, Changchun 130117, China
3 State Key Laboratory of Electroanalytical Chemistry, Changchun Institute of Applied Chemistry, Chinese Academy of Sciences, Changchun 130022, China
Luon110@nenu.edu.cn, yuanfuyu@yahoo.com.cn, wanli@jlu.edu.cn

Abstract

Under the framework of PU (learning from Positive and Unlabeled data), this paper originally proposes a three-step algorithm. First, CoTraining is employed to filter the likely positive data out of the unlabeled dataset U. Unlike former PU algorithms [2, 3], which exploit a reliable negative dataset, the underlying rationale here is that purifying the unlabeled data by filtering out the small fraction of positive data it contains is much easier than extracting a small but reliable negative dataset; moreover, the likely positive data that is filtered out can also be used to supplement the positive dataset. Second, an affinity propagation (AP) approach attempts to pick out the strong positives from the likely positive set produced in the first step; the data picked out is added to the positive dataset P. Finally, a linear One-Class SVM learns from the purified U as negative and the expanded P as positive. Because of the algorithm's characteristic of automatically expanding the positive dataset, the proposed algorithm performs especially well in situations where the given positive dataset P is insufficient. A comprehensive experiment proves that our algorithm is preferable to the existing ones.

1. Introduction

Traditionally, in order to construct a "homepage" classifier, one needs to collect a sample of homepages (positive training examples) and a sample of non-homepages (negative training examples); a general binary text classifier is then built by employing some algorithm on the positive and negative datasets. Such algorithms are termed supervised learning algorithms [1]. Traditional supervised learning algorithms are not directly applicable here because they all require both labeled positive and labeled negative documents to build a classifier. Collecting negative training examples is especially delicate and arduous because (1) negative training examples must uniformly represent the universal set excluding the positive class (e.g., a sample of non-homepages should represent the Internet uniformly, excluding the homepages), and (2) manually collected negative training examples can be biased by humans' unintentional prejudice, which can be detrimental to classification accuracy. For the above reasons, PU classification has become an important problem.

This paper has two main contributions. The first is the proposal of employing CoTraining to purify the unlabeled dataset; thereby the PU problem is transformed into a supervised learning problem. Our second contribution is the employment of affinity propagation (AP) on the likely positive data to expand the positive dataset.

The remainder of the paper is organized as follows: some related works are presented in Section 2; details of the proposed algorithm can be found in Section 3; a number of comparative experiments are presented in Section 4; Section 5 briefly draws conclusions.

2. Related Works

Former algorithms used to first assign a negative "pseudo-label" to all unlabeled data and then train on it together with the positive dataset; the resulting classifier is used again to assign "pseudo-labels" to the unlabeled data. Since such a classifier is trained on, and classifies, the same unlabeled dataset, it is not likely to assign confident labels. This embarrassment is avoided by our CoTraining algorithm, which uses two individual classifiers trained on two different datasets to purify U iteratively.

Theoretically, Probably Approximately Correct (PAC) learning from positive and unlabeled examples has been proposed; it defines a PAC learning model from positive and unlabeled statistical queries. Liu Bing et al. reviewed several two-step PU approaches and proposed Biased-SVM [2]; Liu Bing also employs a weighted logistic regression for PU and originally defines a measure on the unlabeled dataset. PEBL is another two-step approach to PU [3]: it attempts to explore the distinguishing words between the positive and unlabeled corpora and then uses these words to obtain a reliable negative set (RN). Combining the CoTraining framework with PU was first proposed by Blum and Mitchell [4], and another comprehensive study on CoTraining was made by Nigam and Ghani [5]. So far as we know, there is no existing algorithm that deals with supplementing the positive dataset; unlike these methods, this paper proposes both the adoption of CoTraining for purifying the unlabeled dataset and affinity propagation (AP) for expanding the positive dataset.

3. The Proposed Algorithm

As mentioned in the introduction, the proposed algorithm can be divided into three steps: (1) purify the unlabeled dataset with CoTraining by filtering out the likely positive set; (2) employ affinity propagation on the likely positive set to supplement the positive dataset; (3) train a linear SVM on the purified unlabeled dataset as negative and the expanded positive dataset as positive.

3.1. CoTraining Step

In order to obtain a reliable negative dataset, CoTraining consists of two individual SVM learners (base classifiers) built on the same positive dataset and two different unlabeled datasets. The way our CoTraining differs from traditional CoTraining [5] is that the two base classifiers are based not on two different feature spaces but on two different training datasets of the same feature space. The two base classifiers "help", or "co-purify", each other by filtering out likely positive examples from the unlabeled dataset of the counterpart. In Algorithm 1 the positive dataset is denoted by P, the unlabeled dataset by U, an arbitrary document by d and a classifier by S; Classify(S, d) signifies the label that S assigns to d; L represents the likely positive dataset filtered out from U, and pU denotes the purified unlabeled dataset.

Algorithm 1: CoTraining Algorithm
Input: (P, U)
Output: (L, pU)
1: Randomly split U into two sets U1, U2, subject to U1∪U2=U, U1∩U2={}
2: Build SVM classifier S1_0 with P as positive and U1 as negative
3: Build SVM classifier S2_0 with P as positive and U2 as negative
4: L={}; Q={}; R={}; i=0
5: while (true)
6:    Q={d | Classify(S2_i, d)==positive, d∈U1}
7:    U1=U1−Q
8:    R={d | Classify(S1_i, d)==positive, d∈U2}
9:    U2=U2−R
10:   if (Q=={} and R=={})
11:      break
12:   end if
13:   L=L∪Q∪R; i=i+1
14:   Build SVM classifier S1_i with P as positive and U1 as negative
15:   Build SVM classifier S2_i with P as positive and U2 as negative
16: end while
17: L={d | Classify(S1_i, d)==positive and Classify(S2_i, d)==positive, d∈L}
18: pU=U1∪U2
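A minimal Python sketch of Algorithm 1 follows, using scikit-learn's LinearSVC as the base classifier (the paper does not specify an SVM implementation, so this choice and all names are ours). It assumes both unlabeled halves stay non-empty during the iterations.

```python
import numpy as np
from sklearn.svm import LinearSVC

def cotrain_purify(P, U, max_iter=50):
    """P, U: (n_docs, n_features) arrays. Returns (L, pU) as in Algorithm 1."""
    rng = np.random.default_rng(0)
    idx = rng.permutation(len(U))
    U1, U2 = U[idx[: len(U) // 2]], U[idx[len(U) // 2:]]
    L = np.empty((0, U.shape[1]))
    for _ in range(max_iter):
        s1 = LinearSVC().fit(np.vstack([P, U1]),
                             np.r_[np.ones(len(P)), -np.ones(len(U1))])
        s2 = LinearSVC().fit(np.vstack([P, U2]),
                             np.r_[np.ones(len(P)), -np.ones(len(U2))])
        q = s2.predict(U1) == 1      # likely positives found by the counterpart
        r = s1.predict(U2) == 1
        if not q.any() and not r.any():
            break
        L = np.vstack([L, U1[q], U2[r]])
        U1, U2 = U1[~q], U2[~r]
    return L, np.vstack([U1, U2])    # likely positive set, purified U
```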

3.2. Affinity Propagation Step

For the purpose of expanding the positive dataset, affinity propagation is adopted. The advantages of affinity propagation clustering [6-9] over other clustering methods lie in its greater stability under different initializations. In affinity propagation clustering, two kinds of messages are exchanged between data points, each of which takes into account a different kind of competition; messages can be combined at any stage to decide which points are exemplars and, for every other point, which exemplar it belongs to. The "responsibility" r(i,k), sent from data point i to candidate exemplar point k, reflects the accumulated evidence for how well-suited point k is to serve as the exemplar for point i, taking into account other potential exemplars for point i. The "availability" a(i,k), sent from candidate exemplar point k to point i, reflects the accumulated evidence for how appropriate it would be for point i to choose point k as its exemplar, taking into account the support from other points that point k should be an exemplar.

Affinity propagation takes real numbers s(i,k) as input, where s(i,k) reflects the similarity between data points i and k and can be set to the negative squared Euclidean distance, s(i,k) = −||x_i − x_k||². The self-similarities s(k,k) are set equal to each other for all k, because all data points are equally suitable as exemplars; usually s(k,k) is set to the median of the input similarities (resulting in a moderate number of clusters) or to their minimum (resulting in a small number of clusters). The number of identified exemplars (number of clusters) is influenced by the initialized value of s(k,k); however, the true number of clusters may vary widely and need not be exactly a "moderate" or a "small" number. So in our design we set

  s(k,k) = min_{i,j} s(i,j) + α ( max_{i,j} s(i,j) − min_{i,j} s(i,j) ),  α ∈ [0,1]

The availabilities are initialized as a(i,k) = 0 and the responsibilities as r(i,k) = 0. Then the responsibilities and availabilities are iteratively computed as:

  r(i,k) ← s(i,k) − max_{k'≠k} { a(i,k') + s(i,k') }
  a(i,k) ← min{ 0, r(k,k) + Σ_{i'∉{i,k}} max{0, r(i',k)} },  i ≠ k
  a(k,k) ← Σ_{i'≠k} max{0, r(i',k)}

After the AP step, AP picks out a supplementary set sP from the likely positive dataset L. To quantify the labels, we assign 1 to the positive label and −1 to the negative. In Algorithm 2, T denotes the exemplar set that AP identifies on L:

Algorithm 2: Affinity Propagation Algorithm
Input: (P, pU, L)
Output: (aP)
1: sP={}
2: for each d ∈ T
3:    Rank the documents in pU and P according to their similarity to d
4:    sum=0
5:    for each nd among the top-ranked documents
6:       if (nd ∈ P)
7:          sum=sum+similarity(nd,d)×(1)
8:       else if (nd ∈ pU)
9:          sum=sum+similarity(nd,d)×(−1)
10:      end if
11:   end for
12:   if (sum>0)
13:      sP=sP∪{d}
14:   end if
15: end for
16: aP=sP∪P

The positive dataset is thus expanded with sP: aP = sP ∪ P.
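The message-passing updates above can be sketched in vectorized Python as follows. All names are ours, and the damping factor is an assumption (the paper does not state one); exemplars are read off as the argmax of a(i,k) + r(i,k) per point.

```python
import numpy as np

def affinity_propagation(S, damping=0.5, iters=200):
    """S: (n, n) similarity matrix with the preferences s(k,k) already on
    its diagonal. Returns the exemplar index chosen for each point."""
    n = S.shape[0]
    R = np.zeros((n, n)); A = np.zeros((n, n))
    for _ in range(iters):
        # r(i,k) <- s(i,k) - max_{k' != k} (a(i,k') + s(i,k'))
        AS = A + S
        idx = np.argmax(AS, axis=1)
        first = AS[np.arange(n), idx]
        AS[np.arange(n), idx] = -np.inf
        second = AS.max(axis=1)
        Rnew = S - first[:, None]
        Rnew[np.arange(n), idx] = S[np.arange(n), idx] - second
        R = damping * R + (1 - damping) * Rnew
        # a(i,k) <- min(0, r(k,k) + sum_{i' not in {i,k}} max(0, r(i',k)))
        Rp = np.maximum(R, 0); np.fill_diagonal(Rp, R.diagonal())
        Anew = Rp.sum(axis=0)[None, :] - Rp
        dA = Anew.diagonal().copy()          # a(k,k) has no min-with-zero
        Anew = np.minimum(Anew, 0); np.fill_diagonal(Anew, dA)
        A = damping * A + (1 - damping) * Anew
    return np.argmax(A + R, axis=1)          # exemplar of each point
```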
3.3. Semantic-based Feature Extraction in One-Class SVM

A word may have many meanings; WordNet expresses a meaning with a synonym set. If two features in one category share a common synonym set, we can say that these features represent this category strongly [10]. In the process of semantic feature extraction, multi-sense words are disambiguated and the correct sense is selected. The algorithm finds the semantics of all the words in the positive set; the repeated semantics are the meanings of the multi-sense words in the positive set. The words corresponding to these semantics constitute the important features representing the positive examples. The meaning of a document is determined by the overlapping semantics of documents, so these features, which carry the meaning of the document, represent it well; forming the vector of a positive document from these words avoids losing the meaning of the document while reducing the dimension of the document vector. In this paper we improve the original algorithm to form the document vectors of positive examples by the above method of semantic feature extraction.

SVM is the most popular algorithm in text classification; the principle of minimizing structural risk makes SVM suitable for high-dimensional text data. The One-Class SVM algorithm uses only the positive set to train the classifier. Our algorithm provides a method of applying semantic extraction to form positive document vectors for PU problems and to ensure the accuracy of the later classifier; this technique is especially suited to situations where the given positive dataset is insufficient, especially when the positive examples are few.

The algorithm of semantic-based feature extraction in One-Class SVM is as follows (LP denotes the set of positive documents processed):

Algorithm 3: Semantic-based Feature Extraction in One-Class SVM
1: Set hm_allSyn=NULL (all semantics appearing in LP); set hm_crossSyn=NULL (semantics appearing repeatedly in LP)
2: For each document d in LP
3:   For each word w in d
4:     For each semantic s of w
5:       If not s∈hm_allSyn
6:         hm_allSyn+=s
7:       else if not s∈hm_crossSyn
8:         hm_crossSyn+=s
9: For each document d in LP
10:  For each word w in d
11:    t=tfidf(w)
12:    For each semantic s of w
13:      if (t≠0 and s∈hm_crossSyn)
14:        output w to the file of document vectors of positive examples
15:        sP=LP∪{d}
16:        break

The overall system flow is: the data set is expressed as a file of word sequences; after removing stopwords and stemming, tf-idf vectors are formed for the training sets built on U and P and for the test set; the One-Class SVM classifier is trained using semantic feature extraction (and, for comparison, feature selection based on document frequency); test documents represented as SVM vectors are then classified (Fig. 1: the system flow chart).

4. Empirical Evaluation

4.1. DataSets

We used two popular text collections in our experiments. The first one is Reuters-21578, which has 21578 documents collected from the Reuters newswire; of its 135 categories, only the 10 most populous are used, which gives us 10 datasets. The second collection is the Usenet articles collected by Lang, which creates 20 datasets: we use each newsgroup as the positive set and the rest of the 19 groups as the negative set. Each category is employed as the positive class and the rest as the negative class. For each dataset, 30% of the documents are randomly selected as test documents. The rest (70%) are used to create the training sets as follows: γ percent of the documents from the positive class is first selected as the positive set P, and the rest of the positive documents together with the negative documents are used as the unlabeled set U. In our experiments we take γ to be 0.1 and 0.7.

4.2. Evaluation Measure

Three measures evaluate the effects of the three steps respectively. So far as the CoTraining step is concerned, its goal is to purify U by filtering out the positive examples; to measure the purification effect, the precision and recall of the negative data in pU (the purified unlabeled dataset) are calculated and combined as f = 2·recall·precision/(recall+precision). For the second step, the effect depends entirely on the precision of the positive data in the supplementary set sP. For the third step, which aims to build an accurate classifier, the popular F score is a good choice, and the F score on the test dataset is employed as the measure of the final classifier's accuracy.

4.3. Experiments

The experiments compare the methods of document frequency and semantic feature extraction for the One-Class SVM algorithm. In the data preprocessing, after removing stopwords and stemming, we use tf-idf to express the document vectors and use these vectors to construct the classifier. Table 1 provides the average F score over all categories of the classifier and shows that our method increases the F score by 11.15 percent when γ is 0.1 and by 6.45 percent when γ is 0.7. So we draw the conclusion that our three-step method improves the performance of the One-Class SVM algorithm.

Table 1. Average F of the One-Class SVM classifier
  datasets      γ     Average F   Previous Best F
  Reuters       0.1   0.885       0.774
  20Newsgroup   0.1   0.892       0.780
  Reuters       0.7   0.735       0.632
  20Newsgroup   0.7   0.742       0.716
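As a sketch of the evaluation pipeline, the final step and the combined measure can be written as below with scikit-learn. The One-Class SVM parameter nu and the function names are our assumptions; the paper does not report them.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import OneClassSVM

def train_final_classifier(aP_docs, nu=0.1):
    """tf-idf vectors for the expanded positive set aP, then a linear
    One-Class SVM trained on positives only, as described in Section 3.3."""
    vec = TfidfVectorizer(stop_words="english")
    X = vec.fit_transform(aP_docs)
    clf = OneClassSVM(kernel="linear", nu=nu).fit(X)
    return vec, clf

def f_score(precision, recall):
    """Combined measure used above: f = 2pr / (p + r)."""
    return 2 * precision * recall / (precision + recall)
```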

Finally, we trained the final SVM classifier on the purified unlabeled dataset (pU) as negative and the expanded positive dataset as positive. To test the final classifier, it was used to classify the data in the 10 test datasets, and the F score corresponding to each γ and dataset is listed in Table 2.

Table 2. F scores of the 10 datasets after the three steps, for γ = 0.3, 0.5 and 0.7 (categories: acq, corn, crude, earn, grain, interest, money, ship, trade and wheat, plus their average; per-category F ranges from about 0.75 for trade to about 0.98 for earn, with the average F between about 0.866 and 0.874).

5. Conclusions

This paper combines CoTraining with PU classification and studies the problem of PU text classification based on affinity propagation and semantic feature extraction; the goal is that using these algorithms improves the performance of the classifier. From the comparison with the document-frequency method, One-Class SVM increases the F score by 10.183 percent. The results prove that the combination of the three steps is superior to the former PU algorithms. Affinity propagation can improve the performance of a PU classifier when the positive examples are few; moreover, even if the given positive dataset is insufficient, the algorithm still performs well.

Acknowledgement

This work was supported by the National Nature Science Foundation of China under Grant No. 60803102 and the Science and Technological Development Projects of JiLin Province under Grant No. 20070533.

References
[1] Y. Yang, "An evaluation of statistical approaches to text categorization", Journal of Information Retrieval, vol. 1, no. 1/2, pp. 67-88, 1999.
[2] B. Liu, Y. Dai, X. Li, W.S. Lee, and P.S. Yu, "Building text classifiers using positive and unlabeled examples", in Proc. of the 3rd IEEE Int'l Conf. on Data Mining (ICDM), Melbourne, pp. 179-188, 2003.
[3] H. Yu, J. Han, and K.C.-C. Chang, "PEBL: Positive example based learning for Web page classification using SVM", in Proc. of the Int'l Conf. on Knowledge Discovery and Data Mining, pp. 239-248, 2002.
[4] T. Joachims, "Making large-Scale SVM Learning Practical", in Advances in Kernel Methods - Support Vector Learning, MIT Press, 1999.
[5] K. Nigam, A. McCallum, S. Thrun, and T. Mitchell, "Text Classification from Labeled and Unlabeled Documents using EM", Machine Learning, vol. 39, pp. 103-134, 2000.
[6] B. Frey and D. Dueck, "Clustering by Passing Messages Between Data Points", Science, vol. 315, pp. 972-976, 2007.
[7] C. Zhai, W. Cohen, and J. Lafferty, "Beyond independent relevance: methods and evaluation metrics for subtopic retrieval", in Proc. of the 26th Annual Int'l ACM SIGIR Conference (SIGIR'03), New York, NY, pp. 10-17, 2003.
[8] H. Chen and D. Karger, "Less is more: probabilistic models for retrieving fewer relevant documents", in Proc. of the 29th Annual Int'l ACM SIGIR Conference (SIGIR'06), New York, NY, pp. 429-436, 2006.
[9] K. Song, Y. Tian, W. Gao, and T. Huang, "Diversifying the image retrieval results", in Proc. of the 14th Annual ACM Int'l Conference on Multimedia (MULTIMEDIA'06), New York, NY, pp. 707-710, 2006.
[10] S. Chua and N. Kulathuramaiyer, "Semantic feature selection using WordNet", in Proc. of the IEEE/WIC/ACM Int'l Conf. on Web Intelligence (WI2004), Beijing, pp. 166-172, 2004.

2009 International Conference on Computer Engineering and Technology

Session 3


Ergonomic Evaluation of Small-screen Leading Displays on the Visual Performance of Chinese Users

Yu-Hung Chien
Department of Product Design, Ming Chuan University
5 De Ming Rd., Gui Shan District, Taoyuan County 333, Taiwan
roland@mail.mcu.edu.tw

Chien-Cheng Yen
Graduate School of Design Management, Ming Chuan University
5 De Ming Rd., Gui Shan District, Taoyuan County 333, Taiwan
ccyen0706@yahoo.com.tw

Abstract

A leading display represents a mechanism for exhibiting temporal rather than spatial information, in order to overcome the limitations of small-screen mobile devices. Previous studies examining this area focused only on the information presented by leading displays; in actual use, however, the attention of leading-display users cannot always be assumed to be only on reading leading-display information, as mobile interaction is often a secondary task performed while doing something else. Therefore, this investigation performed a dual-task experiment (a search task for static information and a reading task for leading-display information) to examine the effects of leading-display factors on the visual performance of users during different stages of usage (whether the current usage falls on the first, second, third, fourth, or fifth day of use) on a small screen. The results showed that the leading-display design factors did not distract participants from the static-information search task but did affect participants' reading comprehension of the leading display: speed and presentation mode significantly influenced reading comprehension. Finally, the possible applications of leading displays and the implications of these findings for reading Chinese text are discussed.

1. Introduction

The future is certainly looking mobile and wireless. Ubiquitous computing and personal networks are believed to be the near-future paradigms for our everyday computing needs, essentially giving access to any information, anywhere, at any time. The rapid development of ubiquitous computing has led to the increasing use of mobile devices to read text. However, display space on mobile devices is at a premium, with the amount of information that can be displayed on the small screens at one time extremely limited. One possibility for overcoming this limitation is to use a leading display, where screen space can be traded for time to present temporal information in a dynamic manner: a string of text moves from right to left sequentially along a single line within a small screen. Leading displays are widely used to show additional notifying information; for instance, on websites a dynamic display is a means of presenting notifying information to users while they are reading static information, and users can likewise read dynamic information shown on the single-line display of a facsimile machine.

Previous Chinese dynamic-display studies have examined the effects of several leading-display factors on the visual performance of users, including presentation mode, speed, jump length, font size, text/background color combination, and Chinese typography [1-4, 6-8]. According to the results of these studies [1-4, 6-8], speed and presentation mode were the two most critical factors for visual performance. However, the majority of leading-display studies have been conducted under idealized single-task conditions, resulting in a lack of information on how changes in the context of tasks affect the ability of users to perform effectively: frequently, users perform other tasks without their attention focusing solely on the dynamic display, yet the influence of task context on the use of such devices, and the adaptability of users, has somehow been disregarded. Moreover, in most leading-display studies the reading comprehension of participants in each set of reading conditions has been measured only once, so the connection between visual performance and previous experience in using leading displays, that is, an assessment of the adaptability of users to leading displays, requires further investigation. It is therefore important to address the disconnection between the actual use of leading displays and the effects of leading-display factors on users' visual performance in a dual-task scenario.

This study examined how leading-display design factors and adaptability (days 1-5 of use) determined the visual performance of Chinese users employing leading displays to accomplish dual tasks on small screens.

2. Method

We conducted a 3 × 2 × 5 repeated-measures dual-task experiment to examine the effects of speed (250, 350, and 450 characters per minute, cpm), presentation mode (character-by-character or word-by-word), and adaptability (days 1-5 of use). The speed settings were based on the studies of Chien & Chen [2], Lin & Shieh [3], and Shieh & Lin [4]; the presentation modes were based on the studies of Chien & Chen [2], Lin & Shieh [3], Shieh & Lin [4], Wang et al. [6], Wang & Chen [7], and Wang & Kan [8].

Twelve college students (native Chinese speakers) from Taiwan were selected as participants. An environment involving both leading and static displays was constructed: the dual tasks, consisting of a static Chinese-character search task and a leading-display information reading task, were conducted on a Sony Ericsson P910i smartphone (Fig. 1), with all text material displayed on a 208 × 320 resolution touch-screen. Fig. 2 shows the interface design in which the leading and static displays are presented simultaneously on the smartphone. The leading display could show up to ten Chinese characters of 14-point Chinese typography at a time on a single-line display, and text was presented in black on a light gray background. The smartphone was placed on a 75-cm-high table with a desk synchronization stand and positioned at an incline of approximately 105°; the distance from the center of the screen to the desktop was 8 cm, while the distance from the participants' eyes to the center of the screen was 40 cm.

Figure 1. Example of the experimental interface used in this study.
Figure 2. Photo of the experimental interface shown on a real-world P910i device.

In each trial, participants searched for 10 identical target characters among 100 Chinese characters on the smartphone screen and simultaneously read a 30-character leading-display passage. Participants were required to finish the search and reading tasks within 30 s; during this period the leading display repeated continuously. At the end of the time period, the number of discovered target characters was recorded, and participants were asked to respond to two multiple-answer, multiple-choice comprehension questions based on the leading-display content. At each session, participants performed two trials under each condition, and the same experimental procedure was repeated on days 1-5.

3. Results

Analysis of variance (ANOVA) was applied in the statistical analysis of the experimental data, and Tukey's HSD (honestly significant difference) test was used for post hoc comparisons; the level of significance was set at α = 0.05. Table 1 shows the means and standard deviations of the static-information search scores and the leading-display reading comprehension, and Table 2 shows the ANOVA for the mean comprehension of the leading display.

Table 1. Scores of search and comprehension for each independent variable (means with standard deviations). The static-search scores were uniformly high, about 0.93-0.96, under every speed, presentation mode and day, while mean leading-display comprehension fell from about 0.88 at 250 cpm to about 0.72 at 450 cpm, was higher for the word-by-word mode (about 0.87) than for the character-by-character mode (about 0.76), and remained between about 0.79 and 0.84 across days 1-5.

We found that no factor significantly affected the static-information search score, and none of the interactions among factors was significant; the impact of the leading-display factors on the static search task was thus relatively low. In contrast, presentation mode and speed significantly affected reading comprehension (F1,11 = 21.60, p < 0.01; F2,22 = 14.92, p < 0.01): reading comprehension was significantly higher for the word-by-word presentation mode than for the character-by-character format, and was also higher at the 250-cpm speed than at 350 or 450 cpm. The adaptability factor did not significantly affect reading comprehension, and none of the interactions among factors was significant.

Table 2. ANOVA for mean comprehension of the leading display
  Source               Type III SS   df   MS      F       p
  Speed                1.866         2    0.933   14.92   0.000 a
  Mode                 1.084         1    1.084   21.60   0.001 a
  Day                  0.127         4    0.032   0.634   0.595
  Speed × Mode         0.007         2    0.003   0.035   0.947
  Speed × Day          0.302         8    0.038   1.055   0.390
  Mode × Day           0.141         4    0.035   0.619   0.666
  Speed × Mode × Day   0.581         8    0.073   1.279   0.253
  a. Significantly different at the α = 0.01 level.

4. Discussions and Conclusions

Previous studies of visual performance with leading displays focused only on assessing the adequacy of leading-display design [1, 2, 4]. However, a leading display does not usually appear in isolation: it is designed to accompany static information, so the effect of leading-display design on visual performance for static information is also an important issue. In this study, none of the leading-display design factors distracted participants from the static search task, but they did influence comprehension. This finding suggests that users can receive more information through a well-designed leading display on small-screen devices without sacrificing visual-performance efficiency on a static-information search task.

For the leading display, users had the highest reading comprehension with the word-by-word presentation mode at a speed of about 250 cpm, in accordance with previous Chinese leading-display studies [1-4, 6, 7]. Chinese script belongs to the logographic system, in which characters are used as writing units and no salient boundary exists between characters. The character-by-character presentation mode on a leading display requires subjects to use part of their cognitive resources to divide the character stream into the smallest meaningful units, whereas the word-by-word format adds spaces between words, making the boundaries between words more salient; this helps users divide the Chinese characters into Chinese words, thereby lowering the cognitive load and promoting reading comprehension of dynamic text [2-4].

In terms of user adaptability to leading displays and speed, the leading of dynamic information is a novel presentation technique to most users [3, 4], and 250 cpm is far slower than the average Chinese reading speed of 580 cpm [5]. Consistent with this, previous studies on leading displays have shown no significant improvement in reading comprehension with increasing time of use; users may need more practice to familiarize themselves with this novel, temporal-domain, dynamic technique. Further studies are needed to investigate how users acquaint themselves with and accept this technique as time of use increases, so as to improve reading efficiency for Chinese text dynamically displayed on a small screen.

The results of this study demonstrate user comprehension under a variety of leading-display designs. The suggestions made in this study may assist interface designers in developing effective leading displays that promote improved user comprehension on small-screen mobile devices.

5. Acknowledgment

The authors gratefully acknowledge the financial support of the National Science Council, Taiwan, Republic of China, under Project NSC 97-2221-E-130-021.

6. References

[1] C.H. Chen and Y.H. Chien, "Effect of dynamic display and speed of display movement on reading Chinese text presented on a small screen", Perceptual and Motor Skills, 100: 865-873, 2005.
[2] Y.H. Chien and C.H. Chen, "The use of dynamic display to improve reading comprehension for the small screen of a wrist watch", Lecture Notes in Computer Science, 4557: 814-823, 2007.
[3] Y.H. Lin and K.K. Shieh, "Reading a dynamic presentation of Chinese text on a single-line display", Displays, 27(4-5): 145-152, 2006.
[4] K.K. Shieh and Y.H. Lin, "Dynamic Chinese text on a single-line display: effects of presentation mode", Perceptual and Motor Skills, 100: 1021-1035, 2005.
[5] F. Sun, M. Morita, and L. Stark, "Comparative patterns of reading eye movement in Chinese and English", Perception and Psychophysics, 37(6): 502-506, 1985.
[6] A.H. Wang, J.J. Fang, and C.H. Chen, "Effects of VDT leading-display design on visual performance of users in handling static and dynamic display information dual-tasks", International Journal of Industrial Ergonomics, 32(2): 93-104, 2003.
[7] A.H. Wang and C.H. Chen, "Effects of screen type, Chinese typography, text/background color combination, speed, and jump length for VDT leading display on users' reading performance", International Journal of Industrial Ergonomics, 31(4): 249-261, 2003.
[8] A.H. Wang and Y. Kan, "Effects of display type, speed, and text/background colour-combination of dynamic display on users' comprehension for dual tasks in reading static and dynamic display information", International Journal of Advanced Manufacturing Technology, 23(1-2): 133-138, 2004.
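As a supplementary note on the analysis reported in Table 2, a repeated-measures ANOVA of this kind can be sketched in Python with statsmodels. All column names are our assumptions; AnovaRM requires a balanced design with one observation per cell, so the two trials per condition would first be averaged per participant.

```python
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# df: one row per trial with columns 'subject', 'comprehension',
# 'speed' (250/350/450), 'mode', 'day' (1-5); names are assumptions.
def repeated_measures_anova(df: pd.DataFrame):
    """3 x 2 x 5 within-subject ANOVA on leading-display comprehension."""
    cell = (df.groupby(["subject", "speed", "mode", "day"], as_index=False)
              ["comprehension"].mean())          # average the two trials
    return AnovaRM(cell, depvar="comprehension", subject="subject",
                   within=["speed", "mode", "day"]).fit()
```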

Curvature-based Feature Extraction Method for 3D Model Retrieval

Yujie Liu, Xiaolan Yao, Zongmin Li
School of Computer Science and Communication Engineering, China University of Petroleum (East China), Dongying 257061, P.R. China
e-mail: liuyujie@hdpu.edu.cn, yaoxiaolan2002@163.com, lizm@hdpu.edu.cn

Abstract—The curvature on the surface of a 3D mesh model is an important discrete differential-geometric descriptor. It expresses the curving degree very well, both for models made up of curving pieces and for models with extreme points or extended components. In this paper we use the mean curvature and the corresponding coordinates of the vertexes on the surface as the feature descriptor of a model, and we bring the EMD (Earth Mover's Distance) method into the similarity-measure framework. The descriptor we define describes models accurately and remains invariant under translation and rotation. Through several retrieval experiments, the results imply that this descriptor can be used to find the exact model from the PSB model library; in a retrieval experiment on 314 PSB models it obtained good results, much better than other feature descriptors.

Keywords—mean curvature; feature descriptor; EMD; 3D model retrieval

I. INTRODUCTION

Computer graphics technology and 3D scanning devices have both improved greatly in the past decades, and more and more 3D models have appeared. Meanwhile, much more 3D modeling software has been developed, and 3D models are not only large in number but also complex in data format. 3D model retrieval has therefore always been an important research issue during the past decades and has been implemented prevalently in many practical areas. How to find the exact model that satisfies practical needs has become a heated topic, and how to describe a model accurately is the key problem in the retrieval subject. As we know, shape is received easily and directly by human perception; the shape of a model is its fundamental and lowest-level feature, so many methods extract features from the surface shape attributes of models. Traditional classification of these model features goes through two main periods: from the model shape to the content of 3D models. In this paper we concentrate on the geometric and topological features of the surface: we introduce the curvature on the surface of the model to represent it well, particularly for models made up of curving pieces and those with extreme points or parts, and use it further in retrieval.

The remaining parts of the paper are organized as follows: Section 2 reviews previous works; Section 3 presents our feature extraction method in detail; the similarity measurement is described in Section 4; Section 5 shows and analyzes our experimental results; and our conclusion is in Section 6.

II. RELATED WORK

3D model retrieval methods have been on an increasing trend in the past years. Many methods are based on the shape of the 3D model; they often use surface geometric features, which show the shape characteristics directly, to describe models, such as the normal vector on the surface [11, 12], the distance or geodesic distance on the surface, the normal direction [4, 7], volume [10], area of pieces [13], histograms [13], cord-based methods [14], and so on [15, 16]. In [8, 9] Osada proposes a descriptor named the shape distribution: he adopts shape functions of the model related to geometric characters such as the distance between two random points on the surface, the square root of the area of the triangle formed by three random points on the surface, and the cube root of the volume of a random tetrahedron formed from the model, computes the values of one of these functions, and obtains a statistic to describe the model. Many other means are based on this statistical way of showing the characteristics of models; generally, such means are not very accurate and may only be suited to classifying models roughly. Mahmoudi et al. make use of a curvature function to obtain seven views of the model as its description [10], and similarly compute a statistic of the function values. In any case, the feature we extract from the model has to describe the object accurately.
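For reference, the D2 shape function of Osada's shape distribution described above can be sketched as follows. All names are ours, and for brevity random vertices stand in for points sampled uniformly on the surface, which is a simplification of the original method.

```python
import numpy as np

def d2_shape_distribution(verts, n_pairs=100_000, bins=64):
    """Histogram of distances between random point pairs (the D2
    statistic of [8, 9]). verts: (V, 3) array of vertex coordinates."""
    rng = np.random.default_rng(0)
    i = rng.integers(0, len(verts), n_pairs)
    j = rng.integers(0, len(verts), n_pairs)
    d = np.linalg.norm(verts[i] - verts[j], axis=1)
    hist, _ = np.histogram(d, bins=bins, density=True)
    return hist
```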

III. CURVATURE FEATURE EXTRACTION

Commonly used geometric characters, such as the normal vector direction or other surface-angle information, have simple computations and are usually not considered as the feature of an object directly; they are usually known as assistant tools combined with other features. In this paper we compute a discrete differential-geometric attribute, the mean curvature, and apply it to describe the model directly; for models with smooth surfaces or distinct topological features our method shows better results.

A. Mean Curvature Computation

We use the Meyer method to compute the curvature of each vertex on the mesh surface: first the Voronoi area of the 1-ring neighborhood triangles on the model surface is computed, and then the mean curvature at each vertex [7]. For each vertex P on the triangle mesh surface, connecting the circumcenters of its 1-ring neighborhood triangles constructs the Voronoi area of P; the Voronoi areas depend on the types of the mesh triangles. For a non-obtuse triangle, the area can be computed by the formula below:

  A_Voronoi = (1/8) Σ_{j∈N1(i)} (cot α_ij + cot β_ij) ||x_i − x_j||²   (1)

We then use the mixed area A_Mixed to compute the mean curvature vector by the following formula:

  K(x_i) = (1/(2 A_Mixed)) Σ_{j∈N1(i)} (cot α_ij + cot β_ij)(x_i − x_j)   (2)

where N1(i) stands for the 1-ring neighborhood of vertex x_i, x_j is a neighbor of x_i, and α_ij and β_ij are the angles opposite the edge x_i x_j; the angle relationship is shown in Fig. 1. In all the above formulas, bold font means a vector.

Figure 1. Mean curvature computation.

The mean curvature vector relates to the scalar mean curvature κ_H and the unit normal n as

  K(x_i) = 2 κ_H(x_i) n(x_i)   (3)

Based on the above finite-element principle, we can easily get the mean curvature of every vertex on the triangle mesh surface. Figure 2 shows the curvature result for a face model with curving pieces: the left picture presents the change of the curvature on the model surface, with the colors from red to blue standing for curvature values from big to small; the right picture shows the vertexes of the whole model, the green ones being the feature points we chose according to their curvature values.

Figure 2. The curvature of one face model.

B. Our Descriptor

Our descriptor is defined by the following formula:

  F(M_i) = (P, κ_H),  P = {(x_j, y_j, z_j)},  κ_H = {k_j},  j = 1, 2, …, N   (4)

F(M_i) is the descriptor of any model M_i in our library. It is made up of two sets, P and κ_H: the former holds the coordinates of the vertexes on the surface of the model, and the latter their mean curvatures. We choose the N vertexes with the biggest curvature and combine them with the standard coordinate positions of the vertexes to represent the overall model [7]. In our experiments, after analysis of all the models we chose from the PSB, we set N equal to 50, so our descriptor is a 50-point feature. This descriptor is invariant to translation, rotation, projection and scaling under a canonical coordinate system. Because our feature is based not only on the curvature values but also on the vertex coordinates, conventional similarity-comparison methods do not satisfy our needs; we therefore apply the EMD method to measure the similarity between different models and then use it in retrieval.

IV. EMD SIMILARITY MEASURE

In this section we introduce the earth mover's distance (EMD) to measure our descriptor, and then use it in the retrieval experiment.

A. Method presentation

Traditional comparison methods use the Euclidean distance, the Minkowski distance, the Hausdorff distance and so on. These measures are limited to a fixed feature dimension and are tied to the numbers of feature points in the two models being compared, so such conditional similarity comparison does not satisfy our needs. The earth mover's distance dates back to the transportation problem and has since been applied in many areas. It evaluates the dissimilarity between two multi-dimensional distributions in a feature space in which a distance measure between single features, called the ground distance, is given [12]. Intuitively, given two distributions, one can be seen as a mass of earth properly spread in space and the other as a collection of holes in that same space; the EMD measures the least amount of work needed to fill the holes with earth, where a unit of work corresponds to transporting a unit of earth by a unit of ground distance.

Let P = \{(p_1, w_{p1}), (p_2, w_{p2}), \cdots, (p_m, w_{pm})\} be the first signature with m clusters, where p_i is the representative of cluster i and w_{pi} is its weight; let Q = \{(q_1, w_{q1}), \cdots, (q_n, w_{qn})\} be the second signature with n clusters; and let D = [d_{ij}] be the ground distance matrix, where d_{ij} is the ground distance between clusters p_i and q_j. The goal is to find a flow F = [f_{ij}] between the p_i and the q_j that minimizes the overall cost \sum_i \sum_j f_{ij} d_{ij} [12]. The distance is then

EMD(P, Q) = \frac{\sum_{i=1}^{m} \sum_{j=1}^{n} f_{ij} d_{ij}}{\sum_{i=1}^{m} \sum_{j=1}^{n} f_{ij}}    (5)

The EMD can compare two models whose features have different dimensions and weights, and the features and weights can be defined freely; the computation is fast, and it is a good metric tool. WAN Li-Li used this method to compute the distance between the spatial distributions of the components of two models [11]. This distance is an engineering-oriented measure, commonly used in engineering implementations, so we adopt it in our method.

B. Our measurement

In our method, the signature is composed of the mean curvature and the coordinate values of each chosen vertex on the model surface: we take the mean curvature as the weight of the signature and define the Euclidean distance between the vertex coordinates as the ground distance function. Running the EMD comparison, the dissimilarity between different models can thus be evaluated and computed fast. In the experiment we set SIG_MAX_SIZE to 100; to avoid overflow, N must therefore be smaller than 100. If we need to deal with other, larger mesh models, these constraints can be changed to make SIG_MAX_SIZE much larger.
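As a concrete illustration of Eq. (5) with our curvature-weighted signatures, the sketch below solves the EMD transportation problem with a general-purpose linear-programming routine. It assumes SciPy's linprog is available; a production system would typically use a dedicated transportation-simplex solver as in [12], and the function and variable names here are our own.

import numpy as np
from scipy.optimize import linprog

def emd(P, wP, Q, wQ):
    """EMD of Eq. (5). P, Q: (m, 3) and (n, 3) vertex coordinates;
    wP, wQ: mean-curvature weights. Ground distance = Euclidean."""
    P, Q, wP, wQ = map(np.asarray, (P, Q, wP, wQ))
    m, n = len(P), len(Q)
    D = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=2)  # ground distances
    # Variables f_ij >= 0, flattened row-major; minimize sum f_ij d_ij
    # subject to row sums <= wP_i, column sums <= wQ_j, and
    # total flow = min(sum wP, sum wQ) (the standard EMD constraints [12]).
    A_ub, b_ub = [], []
    for i in range(m):                       # row-sum constraints
        row = np.zeros(m * n); row[i * n:(i + 1) * n] = 1.0
        A_ub.append(row); b_ub.append(wP[i])
    for j in range(n):                       # column-sum constraints
        col = np.zeros(m * n); col[j::n] = 1.0
        A_ub.append(col); b_ub.append(wQ[j])
    total = min(wP.sum(), wQ.sum())
    res = linprog(D.ravel(), A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  A_eq=np.ones((1, m * n)), b_eq=[total], bounds=(0, None))
    return res.fun / total                   # Eq. (5): cost / total flow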

V. EXPERIMENT RESULT

Our experiment is based on the triangle mesh models in the PSB benchmark database [13]. We chose 314 models from the 1814 models; they cover 47 categories and contain 37 non-empty classes. We applied the EMD method to compare similarity and used the models in this database to carry out the retrieval process. The evaluation tool we used is also from the PSB benchmark.

Table 1 shows the time cost of our descriptor computation. We list only four models of different sizes in vertices and faces, and record the time spent computing the mean curvature of all vertices and of the N biggest ones, respectively. All times are for an Intel Celeron M 430 at 1.73 GHz with 512 MB RAM. It is not difficult to see that even the largest model, s1734, spends only about 2.3 seconds computing the N biggest-curvature vertices, while for the small model s0661 only 0.000858 seconds are needed to compute the feature we want. This shows that our method has a low time cost.

TABLE 1. TIMING OF OUR METHOD

  Model ID   Vertices   Faces     Time, all vertices (s)   Time, N biggest (s)
  s0661      71         138       0.000796                 0.000858
  s0838      1981       3907      0.025905                 0.022880
  s0961      21013      40384     0.237080                 0.279705
  s1734      160940     316498    1.883898                 2.267323

A. Retrieval results

We do not list all the retrieval results, because each result contains all 314 models and they are too numerous. Figure 2 shows part of our results for five models: head, spider, bottle, feline, and face. In each row the first plot is the model we want to retrieve, and the next three are the results shown in order of ascending dissimilarity. Our method obtains good results, especially for those models with curving pieces and those with extreme points or extending parts.

Figure 2. Retrieval result.

B. The whole efficiency

We applied all 314 models to the retrieval experiment, obtained the precision-recall plot of all the models, and compared it with the other two methods. From the precision-recall diagram we can compute the precision and recall of each retrieval, with all the models listed in the retrieval results. We compared our method with the Shape Distribution and EGI (extended Gaussian image) descriptors respectively; the resulting plots are shown in Figures 3 and 4 below. From the precision-recall plots of all models (Fig. 3) and all classes (Fig. 4) in our experiment, it is notable that the red line (our method) lies above the green and blue ones, which shows that the mean curvature descriptor has higher retrieval quality than the Shape Distribution and EGI methods.

Figure 3. Average precision and recall of all models.
Figure 4. Average precision and recall of all classes.

VI. CONCLUSION

In this paper we propose a new descriptor for 3D triangle mesh models. The descriptor combines the mean curvature of the vertices on the surface of the model with the coordinate positions of these vertices. Using the EMD method to compare the computed features, we found through the retrieval experiments that the descriptor gives good results, especially for models with curving pieces and models with extreme points or extending parts, and that it clearly outperforms the other two methods. The biggest virtue of our method is that it describes the curving pieces and the extreme points or parts of a model accurately. On the other hand, the weakness of the descriptor is that it does not adapt well to complex models or to models with plain pieces. In our future work we will continue to express the model with other differential geometrical operators, such as the Gaussian curvature or the normal curvature and its direction on the mesh surface. We can also combine the descriptor with other features to obtain a much better feature expression and to address the inaccurate description of general models with plain pieces.

ACKNOWLEDGEMENTS

This work is supported by the National Key Basic Research Program of China (2004CB318006) and the National Natural Science Foundation of China (60533090, 60573154).
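For reference, the per-query precision and recall values underlying the curves of Figures 3 and 4 can be computed as in the following sketch. It assumes the ranking is obtained by sorting the library by ascending EMD to the query and that class labels come from the PSB ground truth; all names are illustrative.

import numpy as np

def precision_recall(ranking, query_class, classes):
    """Precision and recall at every rank of one retrieval result.
    ranking: model ids ordered by ascending EMD (query itself excluded);
    classes: dict id -> class label from the benchmark ground truth."""
    relevant_total = sum(1 for c in classes.values() if c == query_class) - 1
    hits, precision, recall = 0, [], []
    for rank, mid in enumerate(ranking, start=1):
        if classes[mid] == query_class:
            hits += 1
        precision.append(hits / rank)
        recall.append(hits / max(relevant_total, 1))
    return np.array(precision), np.array(recall)

Averaging these arrays over all queries (per model, or per class) gives the two plotted curves.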

REFERENCES

[1] A. Gray, "The Gaussian and Mean Curvatures" and "Surfaces of Constant Gaussian Curvature," in Modern Differential Geometry of Curves and Surfaces with Mathematica, 2nd ed. Boca Raton, FL: CRC Press, 1997, pp. 373-380 and 481-500.
[2] E. Paquet and M. Rioux, "A content based search engine for VRML database," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Santa Barbara, California, 1998, pp. 541-546.
[3] E. Paquet, M. Rioux, A. Murching, T. Naveen, and A. Tabatabai, "Description of shape information for 2-D and 3-D objects," Signal Processing: Image Communication, 16:103-122, 2000.
[4] M. Ankerst, et al., "3D shape histograms for similarity search and classification in spatial databases," in Proc. 6th Intl. Symp. on Spatial Databases (SSD'99), 1999.
[5] M. Hilaga, Y. Shinagawa, T. Kohmura, and T. Kunii, "Topology matching for fully automatic similarity estimation of 3D shapes," in Proceedings of ACM SIGGRAPH, 2001, pp. 203-212.
[6] R. Osada, T. Funkhouser, B. Chazelle, and D. Dobkin, "Matching 3D models with shape distributions," in Proceedings of the International Conference on Shape Modeling, 2001.
[7] M. Meyer, M. Desbrun, P. Schroder, et al., "Discrete differential-geometry operators for triangulated 2-manifolds," in Visualization and Mathematics, Berlin: Springer-Verlag, 2002.
[8] P. J. Besl and R. C. Jain, "Invariant surface characteristics for 3D object recognition in range images," Computer Vision, Graphics, and Image Processing, 33:33-80, 1986.
[9] M. Brady, J. Ponce, A. Yuille, and H. Asada, "Describing surfaces," Computer Vision, Graphics, and Image Processing, 32(1):1-28, 1985.
[10] G. Taubin, "Estimating the tensor of curvature of a surface from a polyhedral approximation," in Proc. 5th Intl. Conf. on Computer Vision (ICCV'95), 1995.
[11] Wan Lili, Hao Aimin, and Zhao Qinping, "A method of 3D model retrieval based on the spatial distributions of components," Journal of Software, 18(11):2902-2913, 2007 (in Chinese).
[12] Y. Rubner, C. Tomasi, and L. J. Guibas, "A metric for distributions with applications to image databases," in Proceedings of the 6th International Conference on Computer Vision (ICCV'98), 1998, pp. 59-66.
[13] P. Shilane, P. Min, M. Kazhdan, and T. Funkhouser, "The Princeton shape benchmark," in Proceedings of the International Conference on Shape Modeling, Genova, Italy, 2004.
[14] R. Osada, T. Funkhouser, B. Chazelle, and D. Dobkin, "Shape distributions," ACM Transactions on Graphics, 21(4):807-832, 2002.
[15] S. Rusinkiewicz, "Estimating curvatures and their derivatives on triangle meshes," in Proceedings of the 2nd International Symposium on 3D Data Processing, Visualization and Transmission (3DPVT 2004), 2004.
[16] B. Hamann, "Curvature approximation for triangulation and quadric-based surface simplification," Computational Geometry: Theory and Applications, 1999.
[17] J.-P. Vandeborre, V. Couillet, and M. Daoudi, "A practical approach for 3D model indexing by combining local and global invariants," in Proceedings of the First International Symposium on 3D Data Processing, Visualization and Transmission (3DPVT 2002), 2002.
[18] S. Mahmoudi and M. Daoudi, "3D models retrieval by using characteristic views," in Proceedings of the 16th International Conference on Pattern Recognition (ICPR 2002), Quebec, 2002, 2:457-460.
[19] J. Assfalg, et al., "Retrieval of 3D objects using curvature maps and weighted walkthroughs," in Proceedings of the 12th International Conference on Image Analysis and Processing, 2003, pp. 345-352.
[20] E. Paquet and M. Rioux, "A query by content system for three-dimensional model and image databases management," in Proceedings on Image and Vision Computing, Ottawa, 1997.
[21] M. Hilaga, Y. Shinagawa, T. Kohmura, and T. Kunii, "Topology matching for fully automatic similarity estimation of 3D shapes," in Proceedings of ACM SIGGRAPH, 2001, pp. 203-212.

A New Method for Vertical Handoff between WLANs and UMTS in Boundary Conditions

Majid Fouladian (a), Morteza Rahimi (b), Alireza Shafieinejad (c), Faramarz Hendessi (c), Mahdi M. Bayat (c)
(a) Azad Islamic University, Saveh, Iran; (b) Azad Islamic University, Qazvin, Iran; (c) Dept. of Electrical and Computer Engineering, Isfahan University of Technology, Isfahan, Iran
majidfoladi@yahoo.com, m18.rahimi@gmail.com, a.shafieinejad@ec.iut.ac.ir, hendessi@cc.iut.ac.ir, mahdimbayat@gmail.com

Abstract

We propose a new vertical handoff method in which the number of signaling and registration processes is lowered as a result of reduced utilization of the home agent and correspondent nodes while users are mobile. Throughput performance of the integrated network is improved by reducing handoff delays and packet losses. Special conditions, such as boundary conditions, are also considered. Performance and delay evaluations, carried out with ns-2, are presented to demonstrate the effectiveness of the proposed scheme.

1. Introduction

Generally, there are two categories of wireless networks: those such as 3G and 2.5G, which provide low-bandwidth services over a wide geographical area, and those such as WLANs, which provide high-bandwidth services over a small geographical area. The need for transparent services to end users makes vertical handoffs between these networks unavoidable, and different algorithms and protocols have been proposed for this process. One of the most challenging problems in these networks is QoS, which must not change significantly during the handoff process; we therefore need an efficient algorithm for initializing and completing the handoff operation [1,2].

In horizontal handoff, the only parameter for selecting a network is the channel quality, which is estimated by the received signal strength of each cell. Vertical handoff can be divided into upward and downward cases [1]: in the downward case, the mobile node (MN) enters a smaller cell from a larger one (e.g., 3G to WLAN), while in the upward case the MN enters a larger cell from a smaller one (i.e., WLAN to 3G). In the vertical case, in addition to channel quality, other parameters such as service cost, bandwidth, network delay, traffic, and MN speed and movement patterns must be considered in selecting a suitable network. After selecting the proper network, the MN initiates the handoff and connects to the target network. The handoff process mainly depends on the network architecture and its hardware and software elements. In upward handoff the MN tries to maintain its connection with the WLAN as long as possible, because of the WLAN's better QoS and lower cost. Downward handoff usually has low delay sensitivity, and in the upward case, since the MN switches to the cellular network before disconnecting from the WLAN, delay is not a serious problem either.

1.1 Nonhomogenous integrated networks

Nonhomogenous networks contain a number of wired networks with different technologies. ETSI has proposed two methods for connecting WLANs and 3G networks, loose and tight coupling, as depicted in Fig. 1. In a tight connection, a WLAN is connected to the 3G network in the same way as other radio access networks: it becomes a part of the 3G core, and all WLAN traffic is transferred via the 3G network. In a loose connection, a separate path is provided for 3G traffic; the WLAN traffic is not transferred via the 3G network, and routing to the Internet is via the WLAN gateway. In this case the 3G networks and the WLANs can employ different protocols for authentication and accounting [5].

Fig. 1: Tight and loose coupling architectures.

Management of handoff protocols in the network layer is based on Mobile IP (MIP), which introduced two new elements, the Home Agent (HA) and the Foreign Agent (FA), to address the problems of mobility [3,4]. These agents provide synchronization, updating and authorization of user connections and of the care-of address (CoA) obtained in the foreign network.

2. The Proposed Method

In our approach, the mobile node's location in the UMTS network is estimated as the MN nears a WLAN. We estimate the location of the MN using the 3GPP methods, which can be categorized into four major groups [6]: cell identifier, OTDOA (observed time difference of arrival), UTDOA (uplink time difference of arrival), and GPS. In our scheme, the UMTS SGSNs act as FAs, and the WLAN APs under each SGSN are treated as Node Bs of that SGSN. Fig. 2 shows the UMTS/WLAN vertical handoff architecture.

Fig. 2: UMTS/WLAN vertical handoff architecture.

We also employ a database of user and WLAN information, denoted the WUD, at the RNC unit. For each WLAN, the WUD contains the geographical position and the general specifications of the network, such as bandwidth, delay and cost, while for each user we store an access and priority list of the WLANs the user can connect to. Regardless of the positioning method employed, when a correspondent node (CN) requests the position of an MN from the SRNC via the Iu interface, the SRNC collects the required information from the LMU, evaluates the MN position, and responds to the CN [7]. The SRNC regularly evaluates the MN position and relays this information to the CN. Note that upward and downward handoffs must be treated differently, because the 3GPP position estimation cannot be used while the MN is inside a WLAN. We first describe the process of turning on the WLAN interface, and then explain the details of our protocol for the downward and upward cases.

2.1 Downward handoff

We assume the MN is connected to the CN via a Node B and moves in the UMTS network. At an appropriate time, the SRNC computes the approximate position of the MN and compares it with the WLAN positions stored in the WUD. The comparison can be based on the inequality

(x - x_0)^2 + (y - y_0)^2 < (R - R_{th})^2    (1)

where (x, y) is the current MN position, (x_0, y_0) is the center of the WLAN, and R and R_{th} are the radius of the WLAN and the threshold, respectively. If the inequality holds, the MN is nearing a WLAN boundary, and the SRNC sends a message to the MN to turn on its WLAN interface.

The MN turns on its WLAN interface and searches for advertisement messages from nearby WLAN APs. It then determines whether there is an AP with better quality than the UMTS network according to a weighted function [7] of the AP parameters:

f_n = W_n = \sum_s \Big[ \prod_i E^n_{s,i} \Big] \sum_j f_{s,j}\big( w_{s,j}, N(Q^n_{s,j}) \big)    (2)

If the MN does not find a suitable AP with a cost function better than the current UMTS network, it ignores the handoff process and continues its current connection. Otherwise, the handoff process shown in Fig. 3 is initiated. Entering the WLAN from the UMTS network, the MN obtains a new IP address from the WLAN. The AP sends a registration request containing this new CoA to the FA; the FA registers the MN and responds to the AP. After the new address is assigned, the FA knows the new IP address of the MN (which is valid in the AP domain). All packets received from the CN carry the old MN IP address (which is valid in the UMTS domain) and must be tunneled by the FA to the new IP address, routed to the AP, and finally forwarded to the MN. On the other hand, all packets from the MN to the CN must be sent using the old IP address, which is valid for the UMTS network; since the source address of these packets is the old MN address, they are first routed through the AP using the new IP address and then forwarded to the CN. Finally, the FA sends a binding update message to the Node B to clean up the allocated resources.

Fig. 3: The UMTS to WLAN handoff process.
Fig. 4: Signal flow for downward handoff. M1-4: data path between the CN and MN before handoff; M4: sending target network information; M5: MN signaling flow; M11-13: data path for the neighboring WLAN; M14-15: data path from the MN to the CN.
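A compact sketch of the two decision steps above — the boundary test of inequality (1) on the SRNC side and the network selection by cost comparison — is given below. It is only illustrative: the true selection function of [7], Eq. (2), combines per-service elimination factors and weighted quality terms, which we collapse here into a plain weighted sum, and all names, parameter values and the WUD record layout are our own assumptions.

# Hypothetical WUD record: WLAN center, radius, and advertised parameters.
WUD = {"wlan_1": {"center": (120.0, 80.0), "radius": 100.0,
                  "params": {"bandwidth": 11.0, "cost": 0.2, "delay": 0.03}}}

def near_boundary(mn_pos, wlan, r_th=15.0):
    """Inequality (1): true when the MN lies inside the circle of radius R - Rth."""
    dx = mn_pos[0] - wlan["center"][0]
    dy = mn_pos[1] - wlan["center"][1]
    return dx * dx + dy * dy < (wlan["radius"] - r_th) ** 2

def cost(params, weights):
    """Simplified stand-in for Eq. (2): weighted score of the parameters
    (bandwidth counts positively; cost and delay count negatively)."""
    return (weights["bandwidth"] * params["bandwidth"]
            - weights["cost"] * params["cost"]
            - weights["delay"] * params["delay"])

weights = {"bandwidth": 1.0, "cost": 5.0, "delay": 10.0}
mn, wlan = (60.0, 90.0), WUD["wlan_1"]
if near_boundary(mn, wlan):                               # SRNC-side check, Eq. (1)
    umts_cost = cost({"bandwidth": 0.384, "cost": 1.0, "delay": 0.08}, weights)
    if cost(wlan["params"], weights) > umts_cost:         # MN-side comparison
        print("initiate downward handoff to wlan_1")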

2.2 Upward handoff

We now consider the MN connected to the CN in the WLAN network. While the MN is in the WLAN and connected to the CN via the AP, it must continuously measure the strength of the received signal and compare it with a threshold S_{th}. If the measured strength stays below S_{th} for at least a threshold interval T_{th}, the MN sends a handoff request to the Node B; in this case only one upward handoff is required. If the system rejects the request, the MN must find another network, otherwise it will be disconnected. The Node B forwards the request to the FA, which registers the MN and then informs the Node B. Afterwards, all packets sent between the MN and the CN are transmitted through the Node B, and the FA sends a binding update message to the AP to clean up the allocated resources. This process is illustrated in Fig. 5, with the corresponding signaling flow in Fig. 6.

Fig. 5: The WLAN to UMTS handoff process.
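The S_{th}/T_{th} trigger rule of Section 2.2 can be stated compactly: remember when the signal first dropped below S_{th}, reset on recovery, and fire once the condition has held for T_{th}. The following sketch is our own illustration with illustrative threshold values.

def make_upward_trigger(s_th, t_th):
    """Fire once the WLAN signal has stayed below s_th for at least t_th
    seconds. Returns a stateful measurement callback."""
    state = {"below_since": None}
    def on_measurement(t, rssi):
        if rssi >= s_th:
            state["below_since"] = None           # signal recovered: reset timer
            return False
        if state["below_since"] is None:
            state["below_since"] = t              # first sample below threshold
        return t - state["below_since"] >= t_th   # sustained fade -> handoff
    return on_measurement

trigger = make_upward_trigger(s_th=-85.0, t_th=2.0)
for t, rssi in [(0.0, -80), (0.5, -88), (1.0, -90), (2.7, -91)]:
    if trigger(t, rssi):
        print("send handoff request to Node B at t =", t)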

Fig. 6: Signaling flow in an upward handoff starting in the WLAN. M1-5: data path between the CN and MN before handoff.

2.3 Boundary conditions

In this section we consider the case where the WLAN is not covered by a single Node B but by two Node Bs. We identify two cases, based on the number of FAs related to these Node Bs: either the Node Bs share a common FA, or they belong to two different FAs. In both cases the downward handoff is the same as in Section 2.1, while the upward handoff differs from Section 2.2 and is discussed in detail below.

2.3.1 Two Node Bs under one FA

According to Fig. 7, the MN first connects to the CN via Node B1, then enters the WLAN with a downward handoff and accesses the CN through the AP; afterwards the MN exits the WLAN and enters the domain of Node B2. While in the WLAN, when the received signal strength becomes less than the threshold, the MN receives an advertisement message from Node B2 and sends a handoff request to Node B2. Because both Node Bs are under the same FA, it is not necessary to register with the HA in this case, despite the existence of two different Node Bs: a reconnect request to the SGSN is sufficient. Moreover, the protocol is completely transparent to the HA and the CN, which do not need to know the new IP address of the MN. This part of the protocol was simulated, and the results show an improvement in handoff delay; the signaling is shown in Fig. 8.

Fig. 7: Special handoff when the source and destination FAs are the same.
Fig. 8: Signaling for upward handoff when the source and destination FAs are the same. M1-4: data path between the CN and MN before handoff.
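When the two Node Bs belong to different FAs — the case treated in Section 2.3.2 below — packets in flight during re-registration risk being lost. The following sketch illustrates the duplicate-and-resume buffering at FA2 that Section 2.3.2 proposes: FA1 copies every forwarded packet to FA2, and FA2 resumes delivery from the packet after the last ID reported by the MN. It is our own illustration, with hypothetical packet IDs and buffer sizes.

from collections import deque

class FA2Buffer:
    """FA2 side of the duplicate-and-resume scheme of Section 2.3.2."""
    def __init__(self, maxlen=1024):
        self.copies = deque(maxlen=maxlen)     # (packet_id, payload), in order
    def on_copy_from_fa1(self, packet_id, payload):
        self.copies.append((packet_id, payload))
    def resume(self, last_received_id):
        """Packets the MN has not yet received, ready to forward via Node B2."""
        return [p for pid, p in self.copies if pid > last_received_id]

buf = FA2Buffer()
for pid in range(100, 106):
    buf.on_copy_from_fa1(pid, b"payload-%d" % pid)
print(len(buf.resume(last_received_id=102)))   # 3 packets: 103, 104, 105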

2.3.2 The case of two Node Bs under two FAs

When the two Node Bs are under the coverage of two different FAs (Fig. 9 shows the situation before and after the handoff), the process needs a registration at the CN, which causes more delay than the previous handoffs. As the strength of the signal received from the AP at the MN becomes less than the threshold, the MN receives an advertisement message from Node B2 and sends a handoff request to Node B2, which forwards it to FA2 and then to the HA, respectively. After the registration at the HA, the HA must also send an updating message to the CN, so that the MN address in the packets coming from the CN is corrected; the responses are returned to FA2 and then to Node B2, and after this message the valid address of the MN is the one obtained from FA2. To clean up the resources allocated for the MN at Node B1, FA2 sends a connection-updating message to FA1, which in turn forwards it to Node B1. FA1 must keep sending all packets destined to the MN to FA2 until the CN registers the new address of the MN. To reduce the effect of packet loss during the handoff delay, we propose that FA1 route the packets destined to the MN via Node B1 and, in addition, send a copy of them to FA2. The MN then sends a message containing the ID of the last received packet to FA2 via Node B2, which makes it possible for FA2 to forward to the MN exactly those copied packets that have not yet been delivered. In this way the number of packets lost during handoff is minimized. The signaling is shown in Fig. 10.

Fig. 9: Two Node Bs under two different FAs, before and after handoff.
Fig. 10: Upward handoff signaling for different source and destination FAs. M1-5: data path between the CN and MN before handoff.

3. Performance Results

We use NS-2 for the simulation [8]. IEEE 802.11b was selected for the WLAN in addition to a UMTS network, and both tight and loose couplings were considered; the Internet delay was selected randomly between 0 and 55 ms. Handoff delay is computed as the difference in time between receipt of the last packet in the previous network and receipt of the first packet in the new network.

Figs. 11 and 12 show the delay in downward handoff. Fig. 11 corresponds to a tight architecture, in which the AP has a wired connection to the SGSN, and Fig. 12 to a loose coupling, in which the FA and the AP have no wired connection, so packets between them are routed through the Internet. For upward handoff, similar results are shown in Figs. 13 and 14 for tight and loose coupling, respectively. In all cases an improvement was achieved relative to the MIP protocol.

The above results considered a WLAN that is completely located within a UMTS network. The results for a WLAN located between two Node Bs under one FA are shown in Figs. 15 and 16 for tight and loose coupling, respectively. Two scenarios were examined: in the first, the MN initiates the connection with the CN in the WLAN and then enters the UMTS network; in the second, the MN connects to the CN via the SGSN address, enters the WLAN, and then initiates a handoff back to UMTS during the connection with the CN.

Fig. 11: Downward handoff delay in tight coupling.
Fig. 12: Downward handoff delay in loose coupling.
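The handoff-delay metric just defined is straightforward to extract from a receive trace. The sketch below assumes a simplified trace of (timestamp, network) pairs rather than the actual ns-2 trace format, which would require parsing first.

def handoff_delay(trace):
    """Handoff delay per Section 3: time between the last packet received
    on the previous network and the first packet on the new one.
    trace: list of (timestamp_s, network_id) for received packets, in order."""
    delays = []
    for (t_prev, net_prev), (t_cur, net_cur) in zip(trace, trace[1:]):
        if net_cur != net_prev:              # interface changed: a handoff
            delays.append(t_cur - t_prev)
    return delays

# e.g. a downward handoff at ~4.0 s with a 120 ms gap:
trace = [(3.90, "umts"), (3.95, "umts"), (4.07, "wlan"), (4.12, "wlan")]
print(handoff_delay(trace))                  # -> [0.12]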

Fig. 13: Upward handoff delay in tight coupling.
Fig. 14: Upward handoff delay in loose coupling.
Fig. 15: Upward handoff delay in tight coupling in the case of two Node Bs with one FA.
Fig. 16: Upward handoff delay in loose coupling in the case of two Node Bs with one FA.

Note that while the MN is in the WLAN it has two addresses: the UMTS address, which is used as the source and destination of the transmitted packets, and the CoA, which belongs to the WLAN and is used for routing packets to the CN via the AP.

4. Conclusions

We proposed a new vertical handoff method which does not require any modification of the WLAN structure; the only change is in UMTS, where the SGSN must tunnel packets to the MN after receiving the new MN address. Special conditions, such as boundary conditions, were also considered. The method reduces the interaction between the CN and the HA, and both the processing time and the handoff delay were reduced. In the tight coupling, where the AP has a wired connection to the SGSN, the handoff delay was considerably reduced, which yields a higher network throughput.

References

[1] Q. Zhang, C. Guo, Z. Guo, and W. Zhu, "Efficient mobility management for vertical handoff between WWAN and WLAN," IEEE Commun. Mag., vol. 41, pp. 102-108, Aug. 2003.
[2] F. Siddiqui and S. Zeadally, "Mobility management across hybrid wireless networks: Trends and challenges," Computer Commun., vol. 29, no. 9, pp. 1363-1385, May 2006.
[3] C. Perkins, "IP Mobility Support for IPv4," RFC 3344, 2002.
[4] K. El Malki, "Low Latency Handoffs in Mobile IPv4," Internet Draft, 2005.
[5] A. K. Salkintzis, "Interworking technique and architecture for WLAN/3G integration toward 4G mobile data networks," IEEE Wireless Commun., vol. 11, no. 3, pp. 50-61, June 2004.
[6] 3GPP TS 25.331, "Radio Resource Control (RRC): Protocol Specification," Version 6, June 2005.
[7] J. McNair and F. Zhu, "Vertical handoff in fourth-generation multinetwork environments," IEEE Wireless Commun., vol. 11, no. 3, pp. 8-15, June 2004.
[8] Network simulator ns-2, http://www.isi.edu/nsnam/ns/.

Research on Secure Key Techniques of Trustworthy Distributed System

Ming He (1,2), Aiqun Hu (1), Hangping Qiu (2)
(1) Information Science and Engineering School, Southeast University, Nanjing, China; e-mail: blue_horse@126.com, aqhu@seu.edu.cn
(2) Institute of Command Automation, PLA Science and Technology University, Nanjing, China; e-mail: qiuhp8887@vip.sina.com

Abstract — To reach the goal of intensifying the trustworthiness and controllability of distributed systems, the core functions of secure algorithms and chips should be fully exploited. By building a trustworthy model between the distributed system and user behaviors, constructing an architecture for trustworthy distributed systems, intensifying the survivability of services, and strengthening the manageability of distributed systems, the security problems of distributed systems can be radically solved. In this paper we present a new scheme for constructing a trustworthy distributed system and study its secure key techniques. By setting up a trustworthy computing environment and supplying trustworthy validation and active protection based on identity and behavior, the goal of defending against unknown viruses and intrusions can be reached. The security of the services of distributed systems is improved effectively, and the development of Electronic Commerce and Electronic Government is promoted healthily and quickly.

Keywords — trustworthy distributed system; trustworthiness; controllability; manageability; survivability

I. INTRODUCTION

At present, distributed systems are confronted with a serious crisis of trust [1]. For example:

1) Distributed systems are built on insecure terminal systems. The most prominent security problem of a terminal system is that it is prone to erosion by worm viruses and Trojan horses. Because the bulk of terminal systems do not adopt sufficient safety precautions, important programs and files can be destroyed, and from a compromised terminal other target systems can in turn be attacked by the worm virus or Trojan horse.

2) Distributed systems lack trusted safeguard measures. Practice indicates that worm viruses and Trojan horses cannot be kept out with firewalls, IDS and antivirus software. These products are passive defensive measures and cannot cope with security menaces that are increasingly variable. Moreover, they are extra add-ons: they make high demands on administrators, the cost of their implementation is great, and they lead to a drop in the performance of the whole distributed system.

3) Distributed systems are devoid of controllability and manageability. The capability of distributed systems is insufficient in many respects, such as the handling of user behaviors and run-states, and most remote behavior is unpredictable, so demands of this kind from users can hardly be met.

Moreover, as far as the architecture is concerned, the techniques used in distributed systems are excessive and miscellaneous, and the abuse of this overstaffed design is gradually being revealed; the influence on the performance of distributed systems is increasingly complex, and the gap between requirements and development is widening. Therefore, new ideas are needed to resolve problems such as the security and the functioning of distributed systems. The security problems of distributed systems — including the protection, controllability and manageability of system resources — need to be solved by more reliable and simpler controllable means, so as to construct a trustworthy environment [2]. Such capabilities are absolutely necessary not only for the security of distributed systems but also for the health and continuance of their development.

This research insists that security, controllability, manageability and survivability should be the basic properties of a trustworthy distributed system. The key ideas and techniques involved in these properties are studied, recent developments and progress are surveyed, and the technical trends and challenges are briefly discussed. In Section I we have analyzed the security situation of distributed systems and pointed out the necessity of building trustworthy distributed systems. In Section II we introduce the notion of a trustworthy distributed system. Section III shows how the security problem of distributed systems can be radically resolved. The significance of this work is presented in Section IV.

II. OVERVIEW OF TRUSTWORTHY DISTRIBUTED SYSTEMS

At present there are different understandings of trustworthy distributed systems: some are based on dependable authentication, some on the trustworthiness of the services supported by the distributed system, and some on the conformity of existing security technologies. This divergence of opinion blurs the definition of a trustworthy distributed system and increases the difficulty of assessing the feasibility of a solution. In the TCSEC of the US Department of Defense, for instance, formal description, verification and covert-channel analysis are required from class B upward.

In this paper, the trustworthiness of a distributed system is a set of attributes: security and survivability must be ensured from the user's point of view, and manageability must be supported from the designer's point of view. While the concepts of security, survivability and manageability are decentralized and isolated in the traditional sense, these properties are closely related in a trustworthy distributed system, as depicted in Figure 1. Around the maintenance of trustworthiness and the management of behavior between components, the attributes can be merged to reach the goal of a trustworthy distributed system, in which user behavior and its results are predictable and controllable. Controllability provides detailed mechanisms to monitor system states and control misbehavior; survivability can be thought of as a special control mechanism for resource management and scheduling in the presence of misbehavior. Unlike conventional research on security, trustworthiness information is collected by several methods, such as behavior analysis, state detection and correlation computing; it is stored in an efficient format for quick querying and updating, and spread to the corresponding components for correlation computing.

Figure 1. Trustworthiness maintenance and behavior control.

To complete the trustworthiness of a distributed system, four problems have to be addressed:

1) Trustworthiness of the remote user: the trustworthiness of the user is the precondition of security in a distributed system; identity authentication at the terminal system is needed, so that the terminal and the user can be controlled separately by the distributed system.

2) Trustworthiness of the remote platform: this covers trustworthiness of identity and of the computing environment; otherwise the remote node may be counterfeit or controlled by a Trojan horse. A distributed application is secure only when the platform is credible.

3) Trustworthiness of the remote task: when a distributed application is executed, the initiator of the task must be able to affirm that the task has been carried out without error, and the authority of the terminal must be restricted.

4) Trustworthiness of the remote action: compared with the trustworthiness of a task, trustworthiness of action requires forbidding certain actions and restricting the activity of the remote user; the status of behavior can be supervised, unconventional actions can be managed, and the aftereffect of an action can be estimated. Controllability of remote actions refers to the distribution and control of authority.

The last two problems are the more difficult ones.

III. KEY TECHNOLOGY OF SECURITY

To achieve the security strategy of the whole system, the system must be partitioned into several independent physical modules which verify each other's identity and trustworthiness of behavior. Starting from identity, behavior, content and computing environment, the foundational method for verifying system trustworthiness is studied; the role of security is to reduce the vulnerabilities in the chain of trustworthiness gathering, spreading and processing.

This research proceeds in four steps: (1) a trustworthy model of the distributed system and user behavior is presented, based on existing security technology, by analyzing the requirements of a trustworthy distributed system; (2) a security core chip is designed and completed; (3) a prototype is built to intensify the security of the trustworthy distributed system, by merging pivotal security technologies such as security algorithms, security protocols and intrusion prevention; (4) an estimation theory of trustworthiness is presented to verify the prototype. The first is the main step of the research, in which the effect of user behavior on the security of the distributed system is studied. The study of the security chip is the foundation at the hardware level, and all later work builds on this second step. The third step is the main difficulty of the whole study. The last step is the soul of the study: the prototype is verified using trustworthiness estimation, so as to perfect the trustworthiness mechanisms of the distributed system.

A. Trustworthy model

The trustworthy model is the pivotal artifact in the development of the system; how to build a trustworthy model that analyzes the distributed system and user behavior effectively is the precondition for studying the trustworthy distributed system. By merging the method of trustworthiness verification with the traditional Take-Grant model — introducing trusted subjects, restricting take and grant operations so that they can be used only by trusted subjects, and adding rules to verify trustworthiness in the model — a trustworthy model of the distributed system and user behavior is built. This model has two primary advantages. First, security vulnerabilities of the system can be analyzed with a mathematical model, because the requirement of trustworthiness is described abstractly and exactly, without implementation details. Second, the security trustworthiness of the system is improved by the use of formal description and verification.

The evaluation of user behavior is a comprehensive evaluation, reflected by the user's past behaviors, covering aspects such as security behavior attributes and performance behavior attributes. We use a layered, subdivided and quantitative idea to convert the complicated, comprehensive evaluation of user behavior into the measurable and computable evaluation of behavior evidences: we first subdivide user behavior into behavior attributes, and then subdivide the behavior attributes once again into smaller data units, namely behavior evidences. In this way the evaluation of user behavior in the trustworthy distributed system can be handled effectively, and the method is feasible in engineering. The elements of the trustworthiness analysis are presented in Fig. 2, and a simple architecture of the trustworthy distributed system, which avoids the disadvantages of traditional add-on security mechanisms, is presented in Fig. 3.

Figure 2. The elements in the analysis of trustworthiness.
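As a minimal illustration of the layered, subdivided evaluation just described, the following Python sketch aggregates normalized behavior evidences into attribute scores and then into one behavior-trust value with weighted sums. The attribute and evidence names and all weights are hypothetical; the paper does not fix a particular aggregation formula, so a plain weighted sum is used here.

def evaluate_behavior(evidence, attribute_weights, evidence_weights):
    """Layered evaluation: behavior -> attributes -> evidences.
    evidence[attr][ev] holds a normalized measurement in [0, 1]."""
    trust = 0.0
    for attr, w_attr in attribute_weights.items():
        score = sum(evidence_weights[attr][ev] * val
                    for ev, val in evidence[attr].items())
        trust += w_attr * score
    return trust

# Illustrative attributes and evidences (names are assumptions, not the paper's):
evidence = {"security":    {"auth_failures": 0.9, "policy_violations": 1.0},
            "performance": {"request_rate": 0.7, "resource_usage": 0.8}}
attribute_weights = {"security": 0.6, "performance": 0.4}
evidence_weights = {"security":    {"auth_failures": 0.5, "policy_violations": 0.5},
                    "performance": {"request_rate": 0.5, "resource_usage": 0.5}}
print(evaluate_behavior(evidence, attribute_weights, evidence_weights))  # 0.87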

B. Secure kernel chip

Adopting SoC technology to design a secure kernel chip (SKC) is the hardware foundation of the trustworthy distributed system. A security chip that can serve different cryptographic algorithms is needed urgently, because different applications use dissimilar cryptographic algorithms. In this paper a secure kernel SoC constituted of a RISC core and a multiprocessor is designed: the chip can be applied to different cryptographic algorithms while the cost and efficiency of each algorithm are affected only lightly, and the chip measures up to the TPM standard. The security algorithms and the key storage are realized inside the chip, giving a complete mechanism for key management and security defense. When the computer system starts, the SKC also supports authentication of the trusted proxy on its own node and verifies its trustworthiness.

C. Enhanced security architecture

Consulting existing architectures of secure operating systems, this paper puts forward an architecture for a trustworthy terminal system based on the trustworthy model, in which the secure kernel chip plays an important role in the controllability of the security system: it realizes trustworthy access control and supports flexible security strategies. The estimation of trustworthiness covers both identity and behavior; the former is based on authentication, while the latter is based on the trustworthiness of content, such as the capability of protection and service. In researching the architecture of a trustworthy distributed system, one point must be kept in mind: the physical positions of the nodes of a distributed system are dispersed.

(1) Enhanced security architecture based on P2P. In a P2P architecture there is no central manager for nodes and users, so the trustworthiness of identity must be authenticated by a trusted third party (usually a certification authority, CA), and every node can authenticate a destination node through a proof supported by the trusted third party (usually the CA's signature on the certificates of the platform and of the user identity). Because of the lack of a uniform view of authentication, the authentication of the destination has to be verified individually, in the light of the trusted model. Since the authentication of a platform contains its endorsement key, every node can verify the trustworthiness of the trusted proxy on the destination platform through the EK, after the destination has been verified through the CA-signed certificate. Authorization remains delicate, however, because another node may be controlled through an authorization once it has been accredited to the destination node: such behavior is safe in the view of the owner of the authority, but unsafe in the view of the other nodes.

(2) Enhanced security architecture based on C/S. This problem is avoided easily in a distributed system based on C/S, because the database management of users and authority is centralized and the view of authorization is consistent for all nodes. In the C/S architecture, user information, including authorization and identity, is managed by a trusted centralized management platform residing on the server, while a trusted proxy is deployed on each client. The server verifies the identity of a client through the authorization carrying its signature; it knows at any moment which authorizations the client holds, and can grant or retract authorizations. All authorizations must be distributed by the server: a client has no prerogative to pass any authorization to another client. The authorization of a client is encapsulated with encryption in the SKC and is requested from the server, which supplies the decryption key. In this way the security service is consistent in the view of the whole system.
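The server-centered authorization of architecture (2) can be sketched as follows. This is our own illustration, not the paper's design: an HMAC under a server-held key stands in for the SKC-backed signature and encryption, and all identifiers are hypothetical.

import hmac, hashlib, json

SERVER_KEY = b"server-master-key"   # in the real design this secret would
                                    # never leave the secure kernel chip

def grant(client_id, rights):
    """Server side: issue an authorization bound to one client, with a MAC
    standing in for the SKC-backed signature."""
    token = json.dumps({"client": client_id, "rights": sorted(rights)})
    mac = hmac.new(SERVER_KEY, token.encode(), hashlib.sha256).hexdigest()
    return token, mac

def verify(token, mac, needed_right):
    """Check integrity first, then the requested right. Clients cannot
    mint or extend tokens without SERVER_KEY."""
    good = hmac.compare_digest(
        hmac.new(SERVER_KEY, token.encode(), hashlib.sha256).hexdigest(), mac)
    return good and needed_right in json.loads(token)["rights"]

token, mac = grant("client-42", ["read", "query"])
print(verify(token, mac, "read"))    # True
print(verify(token, mac, "admin"))   # False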

Based on the enhanced terminal system, trusted computing technology and the trustworthiness management technology of networks are integrated to connect with the distributed system and to enhance the ability to handle states dynamically, so that an enhanced trustworthy security system is built. This system supports a basic strategy for intelligent, self-adaptive control of system security and service quality, such as forwarding control, attack pre-warning, survivability control, and immunity [8]. Taking advantage of the C/S architecture and a simplified strategy of authority control, the whole view of authority control is built on the server, the monitoring of client behavior is completed by the trusted proxy of the client, and security management is put into effect for every host and user in the whole distributed system.

Compared with a traditional security architecture, security control is applied in the enhanced architecture of the trustworthy distributed system in five aspects:

1) Security of passwords: using the secure kernel chip of the terminal system, the security of the key algorithms is ensured, since the encryption and decryption of data and the digital-signature authentication are completed inside the chip.

2) Security of identity authentication: the security level is heightened, and certification takes three steps — "USB key + password + biometric keystroke characteristic" — so there is no possibility for a counterfeit user to operate the system illegally. Moreover, since the identity of the platform is the unique 64-bit sequence code of the secure kernel chip, the identity of the platform is safe and the risk of a counterfeit system intruding into the distributed system is reduced.

3) Security of permission assignment: inner process scheduling and outer process access are verified strictly under the direction of the trustworthy model, so that the permissions of the system are guaranteed to be safe.

4) Security of access control: through the trusted monitor of the enhanced terminal system, the trustworthiness of every access is verified, to ensure the trustworthiness of identity, behavior and computing environment in each access from a subject to an object; aggressive behavior is picked out from the mass of normal behaviors, and the access control of the terminal system is completed.

5) Security of system audit: all kinds of behaviors and events are logged in the enhanced security system, supporting a history of events and supervision afterwards.

With these measures, the enhanced security of the distributed system helps to increase the authority of the security rules and the deterrence against hackers, and security can be separated from management.

Figure 3. An architecture sketch map of the trustworthy distributed system.

D. Evaluation theory of trustworthiness

It is not possible to build a perfectly trustworthy distributed system — there is no absolutely safe system — so the quantitative estimation of the trustworthiness of a distributed system is valuable: quantitative research on trustworthiness can find the weaknesses and risks in the distributed system and improve on them, although the quantification of trustworthiness is still at an exploratory stage. An evaluation theory of trustworthiness, containing the estimation of security, survivability and manageability, is the precondition for monitoring and revising the system, and it is also an insurance of the performance of the whole trusted distributed system. The ultimate goal of estimating trustworthiness is not to remove weaknesses completely, but to give the administrator a scheme for balancing service and security, and a measure for defending against attacks actively — for instance, by building a mechanism that describes attack behavior. Emphasizing the architecture and the quantitative analysis of the survivability of the distributed system, and based on a transformation between problem and space, the estimation of trustworthiness is transformed into the method frame of a classic problem. The trustworthiness evaluation is performed on preconfigured models, with logic instructions generated to drive particular control actions, such as forwarding control, survivability control, attack pre-warning, and immunity [8].

IV. CONCLUSIONS

Trustworthiness is an important aspect of the study of distributed systems. Through building the trustworthy model between the distributed system and user behaviors, the trustworthiness of network and grid systems is improved, and the development of remote cooperation and distributed applications is accelerated. To our knowledge, the study in this paper has not been published before, at home or abroad. Its theoretical significance and practical value are as follows: 1) it is a helpful reference for designing and completing security systems, and gives a direction for the study of the theory and methods of security control in distributed systems; 2) the user's control over the distributed system is enhanced, ensuring that system authentication is used exactly and reducing the risk that the system gets out of control through viruses and Trojan horses; 3) security can be separated from management; 4) based on the trustworthiness of the distributed system, the security of the services of distributed systems is improved effectively, and the development of Electronic Commerce and Electronic Government is promoted healthily and quickly.

REFERENCES

[1] Cyber Trust [EB/OL]. http://www.nap.edu/catalog/6161.html.
[2] LIN Chuang, PENG Xuehai. Research on trustworthy networks[J]. Chinese Journal of Computers, 2005, 28(5): 751-758.
[3] LIN Chuang, WANG Yang, LI Linquan. Random model method and evaluation technology of network security[J]. Chinese Journal of Computers, 2005, 28(12): 1943-1956.
[4] LIN Chuang, REN Fengyuan. New network with trustworthiness and controllability as well as expansibility[J]. Journal of Software, 2004, 15(12): 1815-1821.
[5] LIN Chuang, PENG Xuehai. Research on network architecture with trustworthiness and controllability[J]. Journal of Computer Science and Technology, 2006, 21(5): 732-739.
[6] TIAN Junfeng, XIAO Bing, MA Xiaoxue, et al. Trustworthy model and analysis in TDDSS[J]. Computer Research and Development, 2007, 44(4): 598-605.
[7] SHEN Changxiang, ZHANG Huanguo, FENG Dengguo, et al. Summarization of information security[J]. Science in China, Series E (Information Sciences), 2007, 37(2): 129-150.
[8] PENG Xuehai, LIN Chuang. Research on Trustworthy Networks. May 2007.

WebELS: A Multimedia E-Learning Platform for Non-broadband Users

Zheng HE, Jingxia YUE and Haruki UENO
National Institute of Informatics, Japan
hezheng@nii.ac.jp

Abstract

In this paper, a multimedia-capable e-learning platform called WebELS (Web-based E-Learning System) for non-broadband users is introduced. Several technologies are implemented to make multimedia contents usable for users without a wideband network: image-based slides synchronized with audio and cursor movement, streaming-like downloading of the multimedia content, and an offline player embedded in the downloaded content. The system includes three main modules: editor, player and manager.

1. Introduction

The rapid development of computer and Internet technologies has made e-learning an important education method. One of the most attractive features of e-learning is its capability to integrate different media, such as text, picture, audio and video, into multimedia contents, which can provide flexible information associated with instructional design and authoring skills to motivate the learning interest and willingness of students [1-3].

Today's multimedia applications have a wide range of bandwidth requirements: 64 kbps for telephone-quality audio, 128 kbps to 1 Mbps for video conferencing, and up to 54 Mbps for MPEG video. Generally, to support multimedia contents there are three things that a network has to provide: bandwidth, consistent quality of service, and multipoint packet delivery [4]. Therefore, for users in non-broadband areas or without a wideband network at hand, multimedia e-learning with acceptable quality seems to be an impossible task.

The WebELS system presented in this paper provides a multimedia e-learning platform for non-broadband users; both asynchronous and synchronous e-learning are supported. Several technologies are implemented to make the multimedia contents usable without a wideband network: the multimedia content is converted into slide series synchronized with audio and cursor actions, the content download is streaming-like, and an offline player is embedded into the multimedia content [5].

2. System Overview

2.1. Features

WebELS (Web-based E-Learning System) is an online e-learning platform aiming not only to assist teachers in developing, delivering and archiving multimedia contents on the web, but also to provide students with various materials and flexible tools to study in their own learning styles. Its main features are listed below:
- Browser-based: no download or pre-installation is necessary.
- Broad file-type support: text, picture, audio, video and flash contents with a presentation pointer are supported.
- Unicode: multi-lingual content is supported.
- Flexible authoring: all tools necessary to create contents online are provided.
- Always-on and real-time: teachers can archive their lectures or even carry out a virtual classroom over the Internet with WebELS.

2.2. Functions

The system includes three main modules: editor, player and manager.

2.2.1. Editor. The WebELS editor provides an editing toolset for users to develop their own multimedia contents from raw materials in various formats, such as .PPT, .DOC, .PDF, .GIF, .JPG, etc. Teachers can create or edit their own multimedia content by converting or integrating these raw materials, and can make the contents attractive by inserting audio synchronized with cursor movements, or video and flash clips, into a slide. Content title and category are assigned first; the contents are then archived on the WebELS server, where the course list and access permissions are managed. Students can access those contents anywhere, anytime, without a wideband network, and study in their own styles.

2.2.2. Player. The WebELS player provides users with a friendly platform to learn from multimedia contents online or offline. The player is designed like a standard media player, so users can easily control their contents; a zoom-in function is available so that details on a slide can be displayed clearly.

2.2.3. Manager. The WebELS management toolset helps users manage the archived contents, including content categories and access permissions, manage user information, and verify course dependencies.

2.3. Architecture

The implementation of the editor, player and manager is based on a typical Browser/Server (B/S) structure: all user interfaces are implemented as web pages, so the system can be accessed and used without downloading or installing other tools beforehand. The main tasks are executed on the server side and only light work is carried out on the browser side, which much lightens the load on the client-side computers. Figure 1 describes the architecture of the WebELS system. A customized HTTP server handles the web-based user interface, including the HTML and JSP pages; Java Servlets respond to the interaction with the Java tools on the client side and perform data transformation with the database server. The database server provides data storage for the archived contents, including content categories, content permissions, and log and registration information, and all user account information is maintained by an SQL database server.

Figure 1. Architecture of WebELS system.

3. Multimedia Contents

3.1. Image-based content

An e-learning course based on multimedia contents may be highly bandwidth-intensive, and students using a non-broadband network may have difficulty playing and retrieving such contents. The WebELS content is therefore presented as a sequence of image frames instead of a video stream. Not only contents originally prepared in PDF, PPT or DOC formats, but also pictures used to describe lecture scenes (such as GIF, JPG and PNG) are automatically converted into a common image format by the WebELS editor before they are uploaded to the WebELS server. These materials are converted into image series, which are called slides in this paper.

Figure 2 compares the picture quality of the image-based contents provided by the WebELS system and of video contents played by Windows Media Player. Obviously, the image-based picture with 800x600 pixels is clearer and brighter than the video picture with 512x342 pixels. By freely zooming or dragging such content based on high-quality images, students can see fine details which are difficult to recognize in common video contents; Figure 3 is an example of image-based content with enough resolution to preserve the complete details of the picture. So, if users have to limit the content size because of the network, image-based contents are a good substitution.

Figure 2. Comparison of picture quality between image-based and video contents.
Figure 3. WebELS contents with zoom-in function.

3.2. SPEEX audio files

Audio is an indispensable part of an e-learning course, and vocal files of high quality help students understand lectures more clearly and actively. Normally, however, improving the audio quality results in a large size and heavy load of the contents, which is impractical for non-broadband users; an audio format with high quality as well as small size is therefore necessary. SPEEX is an audio compression format designed especially for speech: it is well adapted to Internet applications, provides useful features that are not present in most other codecs, and, being free, offers an alternative to expensive proprietary speech codecs and lowers the barrier of entry for voice applications.

In WebELS, the audio is embedded into the contents by the WebELS editor. Figure 4 illustrates how the SPEEX audio files are implemented in the WebELS system: the user selects the slide to be edited; the audio data, either read from previously saved files or captured through a sound pick-up outfit, are decoded to a raw format, encoded into the SPEEX format, and saved into the selected slide file.

Figure 4. Implementation of SPEEX audio files.

3.3. A synchronized cursor

In the WebELS system, a cursor file is also embedded into each slide of the contents to simulate the lecturer's actions: users can edit a slide and point the cursor onto the corresponding place of the slide to hint at the topic they are talking about, or to show the points they want to stress. Each slide has its own timeline, in order to i) record the cursor positions (x, y) chronologically, and ii) store the state information (s, f) of the embedded audio stream file simultaneously, where s is the current state (play or stop) and f is the frame number. For example: i) when t = t0, (s, f) = (stop, 0) and (x, y) = (0, 0), the audio stream has not yet started and there is no cursor action; ii) when t = t1, (s, f) = (play, f1) and (x, y) = (x1, y1), the audio stream is started and the cursor is pointed to the location (x1, y1) on this slide; iii) when t = ti, (s, f) = (play, fi) and (x, y) = (xi, yi), the audio stream has processed fi frames and the cursor is pointed to the location (xi, yi). When s is stop with f = f1, the audio stream is paused at frame f1 while the cursor keeps moving.

As Figure 5 shows, the cursor positions are synchronized with the audio stream in each slide file through the time axis, so a continuous audio stream can be dispersed over different cursor positions according to the frame numbers recorded by the timeline. Since the audio advances only in the play state, its duration is normally shorter than the cursor action, although both are recorded simultaneously; accordingly, the file size of the audio stream edited on the basis of the timeline is reduced.

Figure 5. Synchronization of cursor positions with audio stream.
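The per-slide timeline of (t, s, f, x, y) records can be queried during playback by taking the last record not later than the current time. A minimal sketch, with illustrative timestamps and coordinates:

import bisect

# One slide's timeline: records (t, state, frame, x, y) as in Fig. 5.
timeline = [
    (0.0, "stop", 0,  0,   0),
    (1.2, "play", 10, 120, 80),    # audio starts, cursor at (120, 80)
    (3.0, "stop", 55, 240, 160),   # audio paused, frame frozen at 55
    (4.5, "play", 55, 300, 200),   # audio resumes from the same frame
]

def state_at(timeline, t):
    """Return the (state, frame, x, y) in force at playback time t:
    the last record whose timestamp does not exceed t."""
    times = [rec[0] for rec in timeline]
    k = bisect.bisect_right(times, t) - 1
    return timeline[max(k, 0)][1:]

print(state_at(timeline, 3.5))   # ('stop', 55, 240, 160): paused at frame 55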

Because the audio stream can be paused while the cursor keeps moving, the duration of the audio stream is normally shorter than that of the cursor action, although both actions are recorded simultaneously. As shown in Figure 5, a continuous audio stream can be dispersed at different cursor positions according to the frame numbers recorded by the timeline; consequently the file size of the audio stream edited on the basis of the timeline is decreased.

3.4. Data structure of WebELS content

Basically, the WebELS contents consist of an image-based slide series, and each slide includes the following information: image, timeline, audio and cursor (see Figure 7). The data of the contents are stored in each slide file as separate data packages instead of one complete file, so a streaming-like download process is available based on such data packages. The audio data as well as the corresponding cursor data are divided into a number of clips, and those clips are also treated as buffering packages.

Figure 7. Data structure of WebELS content

4. Comparison

4.1. General

A comparison was carried out between video-based content and WebELS content, as shown in Table 2. The original content was a 42-minute video file in AVI format which was recorded in real time in a classroom. This video-based content was then edited by the WebELS editor and converted into image-based content with audio and cursor files attached. Compared with the real-time recording system, the content size was much decreased while the image resolution was improved.

Table 2. Comparison between video-based and WebELS contents

                   | Video-based         | WebELS
Size               | 154MB               | 3.9MB
Image resolution   | 512×342             | 800×600
Streaming quality  | Time lag and delay  | Smooth and stable

First we installed the streaming server named GNUMP3d for the video-based content and the WebELS server on the same server machine. Then the Bandwidth Controller (a tool for bandwidth control) was installed on the client-side computer to simulate various network bandwidths, and the two kinds of contents were played under networks with bandwidths of 56Kbps, 256Kbps, 512Kbps, 1Mbps and 10Mbps to test their sensitivity to network conditions. Figure 6 shows the time delay necessary to open the two kinds of contents under the different network bandwidths. The WebELS contents were easier to open than the video-based contents: when the bandwidth was less than 256Kbps, the video-based content could hardly be opened, and only the WebELS content played smoothly. Furthermore, both contents were tried out using a 56K modem under the same conditions; again the video-based content could hardly be opened, while the WebELS content could be opened with some delay during playing.

Figure 6. Comparison of the time delay under various network bandwidths

4.2. Streaming multimedia

A streaming multimedia system should not only ensure that teachers can create appealing multimedia contents but also allow students to access those contents without a wideband network. The WebELS server therefore supports streaming-like download. Within the data package of one typical slide, the audio data usually take a higher percentage of the size than the image, timeline and cursor data, so we also apply the streaming technique to the audio data included in one slide. Suppose a WebELS content consists of N slides (i=1, ..., N), and in each slide the cursor and audio data are divided into M clips (j=1, ..., M). After the image and timeline of slide i are downloaded, the first audio clip and its cursor data are downloaded; then the slide is displayed and the user can start to study. While one audio clip is playing, the next audio clip is being downloaded, and at the same time the download of the data package for the next slide continues. Namely, after one data package is downloaded, the client-side player begins to display the slide, and the content plays as one seamless stream. The process of streaming download in the WebELS system is described in Figure 8.

Figure 8. Implementation of stream-like download
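The download loop of Figure 8 iterates over the N slides and, inside each slide, over its M audio/cursor clips. The following sketch shows one possible shape of that loop; fetch_package() and display() are hypothetical helpers standing in for requests to the WebELS server and for the player, and are not part of the published system.

```python
def fetch_package(slide: int, kind: str, clip: int = 0) -> bytes:
    """Download one data package: the image or timeline of a slide,
    or one audio/cursor clip of that slide."""
    ...  # e.g. an HTTP request against the content server

def display(image: bytes, timeline: bytes, audio: bytes, cursor: bytes) -> None:
    ...  # render the slide, replay cursor positions, play the SPEEX audio

def play_content(n_slides: int, m_clips: int) -> None:
    for i in range(1, n_slides + 1):            # outer loop: slides 1..N
        image = fetch_package(i, "image")
        timeline = fetch_package(i, "timeline")
        for j in range(1, m_clips + 1):         # inner loop: clips 1..M
            audio = fetch_package(i, "audio", j)
            cursor = fetch_package(i, "cursor", j)
            # The slide is shown as soon as its first clip has arrived;
            # later clips act as buffering packages while earlier ones play.
            display(image, timeline, audio, cursor)
```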

5. Offline Player

The WebELS system also provides an offline player for students without a settled network connection. After a user decides to download the e-learning contents, the WebELS server automatically creates a container which includes three main parts: the encrypted offline player, the necessary libraries and the WebELS contents. The container is then converted into a .JAR package by the WebELS server. When the download is finished, the .JAR package is unpacked and saved on the client-side computer, and the user can start the WebELS player. Figure 9 describes the implementation process of the WebELS offline player, which has the following features:

• The offline player is embedded into the WebELS contents, so all contents behave like a portable textbook.
• The interface style is the same as the online viewer, so the offline learning mode can ensure the maximum feasibility for users without network access.
• The contents bundled with the offline player are encrypted in order to protect their copyright.

Figure 9. Implementation of the offline player

6. Conclusions

This paper introduces an e-learning system for non-broadband users. In order to make the multimedia learning materials accessible under networks with limited bandwidth, a kind of image-based content combined with synchronized audio and cursor is created by an editor module provided by the web server. A streaming-like download was implemented so that such multimedia contents can be displayed smoothly even on client-side non-broadband networks. Moreover, an offline player can be embedded in the e-learning content. The WebELS system has been launched at the Internet URL http://webels.ex.nii.ac.jp/; its services have been made available on the Internet and its functions are continuously extended to cover users' requirements.

References
[1] C.A. Carver Jr., R.A. Howard and W.D. Lane, “Enhancing Student Learning Through Hypermedia Courseware and Incorporation of Student Learning Styles”, IEEE Transactions on Education, Vol.42, No.1, pp.33-38.
[2] P.M. Lee and W.G. Sullivan, “Developing and Implementing Interactive Multimedia in Education”, IEEE Transactions on Education, Vol.39, No.4, pp.430-435.
[3] R. Jain, “A revolution in education”, IEEE Multimedia, Vol.4, No.1, pp.1-5.
[4] “Web Design Guidelines for Low Bandwidth”, http://www.aptivate.org/webguidelines/Multimedia.html
[5] H. Ueno, “Web-based Distance Learning for Lifelong Engineering Education - A Personal View and Issues”, Journal of Information and Systems in Education, Vol.1, No.1, pp.45-52.

Implementation and Improvement Based On Shear-Warp Volume Rendering Algorithm

Li Guo
College of Electronic Engineering, University of Electronic Science and Technology of China, Chengdu, 610054
dlbasin@163.com

Xie Mei
College of Electronic Engineering, University of Electronic Science and Technology of China, Chengdu, 610054
xiemei@ee.uestc.edu.cn

Abstract

The shear-warp algorithm is so far one of the fastest CPU-based algorithms for volume rendering. It factorizes the projection transformation of the 3D discrete data field into two steps: a shear of the 3D data field and a warp of the intermediate image. The process of resampling is thus changed from 3D space to a 2D plane, and a nearly interactive rendering speed is achieved on graphics workstations by reason of the significantly reduced amount of calculation. Unfortunately, this rendering speed is achieved only by sacrificing the resulting image quality: the resampling occurs only inside the slices, and a single slice's value alone determines the value of the slab between two adjacent slices, so the image quality usually cannot meet the requirements by reason of image aliasing. Addressing these problems of the traditional shear-warp algorithm, this paper presents a new method that makes the resulting image quality much better by mending the resampling values used during image compositing, while hardly affecting the rendering speed. In this method, two adjacent slices are defined as the front slice and the back slice, according to the direction of the viewing ray. A cost function is defined to describe the effect of the two adjacent slices on the slab between them. The values of the resampling points in the front slice and the back slice are input to the cost function as its parameters, and the resulting value reflects the information of the slab more effectively. Compositing with this resulting value as the resampling value improves the image quality significantly and overcomes the drawback of image aliasing in the traditional shear-warp algorithm. Experiments have proved the new method's effects.

1. Introduction

The process of building a screen image from an object's geometric model is called rendering, which is the most important part of the process of scientific computing visualization. Rendering includes surface rendering [1-3] and volume rendering. The surface rendering algorithm can usually reach a fast speed; unfortunately, it cannot reflect the whole primal data and their specific characteristics. Volume rendering, also called direct volume rendering, transforms the 3D data field directly into the final 2D image for display, without constructing a geometric description of the object's surfaces. Therefore, volume rendering is considered the most promising rendering technology in the field of scientific computing visualization.

At present, volume rendering mostly includes the ray-casting algorithm [4] based on image space, the splatting algorithm [5] based on object space, and the shear-warp algorithm [6-7] based on both image and object spaces. Both the ray-casting and splatting methods have the problems of a large amount of calculation and long computing times, owing to the need for resampling after every change of the viewing direction, so it is difficult to meet the requirements of interactive applications with these two methods. The shear-warp algorithm avoids most of this cost, but its image quality is limited. This paper presents a new method that improves the resulting image quality with nearly no effect on the rendering speed.

2. The shear-warp algorithm

The basic principle of the shear-warp algorithm is shown in Figure 1 [8].

Figure 1: (a) Standard and (b) factorized viewing transformation

The shear-warp algorithm is based on the idea of a factorization of the view matrix $M_{view}$, which transforms points from object space to image space, into a shear component $M_{shear}$, a warp component $M_{warp}$ and a principal axis transformation matrix $P$:

$$M_{view} = M_{warp} \cdot M_{shear} \cdot P$$

For a parallel projection, the shear factor $M_{shear}$ contains only a transformation that simply translates each volume slice, in such a way that the viewing direction becomes perpendicular to the slices. Let $M_{view3}$ be the upper-left 3x3 submatrix of $M_{view}$:

$$M_{view3} = \begin{pmatrix} m_{11} & m_{12} & m_{13} \\ m_{21} & m_{22} & m_{23} \\ m_{31} & m_{32} & m_{33} \end{pmatrix}$$

Then the translating factors $s_x$ and $s_y$ along the x-axis and y-axis, decided by the viewing direction, are computed by:

$$s_x = \frac{m_{22}m_{13} - m_{12}m_{23}}{m_{11}m_{22} - m_{21}m_{12}}, \qquad s_y = \frac{m_{11}m_{23} - m_{21}m_{13}}{m_{11}m_{22} - m_{21}m_{12}}$$

So the shear matrix $M_{shear}$ is described as follows:

$$M_{shear} = \begin{pmatrix} 1 & 0 & s_x & 0 \\ 0 & 1 & s_y & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}$$

The warp factor $M_{warp}$, which transforms the intermediate image to the resulting image, can then be derived from $M_{view}$:

$$M_{warp} = M_{view} \cdot P^{-1} \cdot M_{shear}^{-1}$$

Because the warp is applied after the projection, it becomes a 2D operation and is much faster to compute. Briefly, the shear-warp algorithm consists of three primary steps:

1) Transform the volume data to shear space by translating each slice using $M_{shear}$.
2) Project the volume into a distorted 2D intermediate image in shear space by compositing the resampled slices together.
3) Transform the distorted intermediate image to the resulting image by the warp operation using $M_{warp}$.

3. The improvement

3.1. Image compositing

In this section we first introduce the foundation theory of image compositing. Image compositing forms the value of each pixel in the image along each viewing ray, by transforming the resampled volume data into color and opacity values in each slice and compositing them together. The order of the compositing can be front-to-back or back-to-front; in this paper we use the front-to-back order, as shown in Figure 2.

Figure 2: Front-to-back order compositing

We specify colors and extinction coefficients for each scalar value $s$ of the volume data by transfer functions $c(s)$ and $\tau(s)$. The color emitted from one point of the volume is determined by $c(s)\tau(s)$. The intensity $I$ along a viewing ray parameterized by $x$ from 0 to $D$ is then given by:

$$I = \int_0^D \tau(s(x))\, c(s(x))\, \exp\!\Big(-\int_0^x \tau(s(x'))\, dx'\Big)\, dx \qquad (1)$$

When the viewing ray is divided into $n$ equal segments of length $d = D/n$ and the $s$ value is taken as constant within each ray segment, $s_i = s(id)$, as shown in Figure 3 [8], formula (1) can be approximated by:

Figure 3: Sampling of s(x) along a viewing ray

$$I \approx \sum_{i=0}^{n-1} \tau(s(id))\, c(s(id))\, d\, \exp\!\Big(-\sum_{j=0}^{i-1} \tau(s(jd))\, d\Big) \approx \sum_{i=0}^{n-1} \tau(s(id))\, c(s(id))\, d \prod_{j=0}^{i-1} \exp(-\tau(s(jd))\, d) \approx \sum_{i=0}^{n-1} c_i \prod_{j=0}^{i-1} (1-\alpha_j)$$

with the opacity $\alpha_i$ of the $i$-th ray segment defined by

$$\alpha_i = 1 - \exp\!\Big(-\int_{id}^{(i+1)d} \tau(s(x))\, dx\Big) \approx 1 - \exp(-\tau(s(id))\, d) \approx \tau(s(id))\, d$$

and the color $c_i$ defined by

$$c_i \approx \int_{id}^{(i+1)d} \tau(s(x))\, c(s(x))\, dx \approx \tau(s(id))\, c(s(id))\, d \approx \alpha_i\, c(s(id))$$

Then the final formula of front-to-back image compositing can be described as follows:

$$\alpha_{out} = \alpha_{in} + (1-\alpha_{in})\,\alpha_i, \qquad c_{out} = c_{in} + (1-\alpha_{in})\, c_i \qquad (2)$$
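As a concrete illustration of the shear factorization described at the beginning of this section, the following small numpy sketch computes the shear factors and the warp matrix. It is not the authors' implementation (their simulations were done in Matlab), and it assumes the principal axis transformation P has already been applied to the view matrix.

```python
import numpy as np

def shear_warp_factors(m_view: np.ndarray):
    """m_view: 4x4 view matrix after the principal-axis transform P,
    for a parallel projection. Returns (sx, sy, M_shear, M_warp)."""
    m = m_view[:3, :3]                                   # upper-left 3x3 submatrix
    det = m[0, 0] * m[1, 1] - m[1, 0] * m[0, 1]          # m11*m22 - m21*m12
    sx = (m[1, 1] * m[0, 2] - m[0, 1] * m[1, 2]) / det   # (m22*m13 - m12*m23)/det
    sy = (m[0, 0] * m[1, 2] - m[1, 0] * m[0, 2]) / det   # (m11*m23 - m21*m13)/det
    m_shear = np.eye(4)
    m_shear[0, 2] = sx                                   # slice k is translated
    m_shear[1, 2] = sy                                   # by (k*sx, k*sy)
    m_warp = m_view @ np.linalg.inv(m_shear)             # M_view * M_shear^-1
    return sx, sy, m_shear, m_warp
```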

3.2. The improved method

There are two kinds of resampling in the shear-warp algorithm. One is the resampling in each slice during the process of image compositing in shear space; the other is the resampling of the intermediate image while warping it to the resulting image. We only discuss the first kind in this paper.

As demonstrated above, compositing the image by formula (2) is based on the precondition that each ray segment's scalar value is constant. The length of each ray segment is determined by the distance between two primal adjacent slices and the direction of the viewing ray; along the viewing ray direction, the length of the ray segment also represents the length of the slab. We define two adjacent slices as a front slice and a back slice; between the two slices, a slab is formed, as Figure 4 illustrates.

Figure 4: The scalar value of a slab in the traditional shear-warp algorithm

In the traditional shear-warp method, the resampling operation in shear space occurs only inside each slice, so the slab's value is determined by the front slice alone, that is s = sf, having nothing to do with the back slice. Since a single slice cannot represent the slab effectively, the resulting image quality is usually too poor for users' requirements. Considering this relatively poor image quality of the traditional shear-warp algorithm, literature [8] introduced a pre-integration method and literature [9] used a resampling method of real-time interpolation in the middle section, but both increased the amount of calculation.

As the front and back slices both contribute to the slab between them, we define a cost function f to describe this effect, as shown in Figure 5. Its parameters are the resampling values of the front and back slices: the scalar values sf and sb are the values of the resampled points in the front slice and back slice respectively, and the return value s represents the value of the slab. Here we define the cost function as s = f(sf, sb) = w*sf + (1-w)*sb, where w is a scale factor. This function is relatively simple and yet gives a good reflection of the slab's information. The closer the cost function is to the real situation, the better the rendering effect.

Figure 5: Improved resampling for compositing

When compositing, the slab's value s, not sf, is used as the scalar value of the ray segment, which now agrees with the precondition of the image compositing theory mentioned before. The proposed method has little impact on the rendering speed, but the image quality is improved considerably compared with the traditional method, as attested by our experiments.
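To make the change concrete, the sketch below composites one viewing ray front-to-back according to formula (2), using the improved slab value s = w*sf + (1-w)*sb instead of sf alone. It is a minimal illustration rather than the authors' Matlab code; the transfer-function arguments and the early-termination threshold are assumptions.

```python
import numpy as np

def composite_ray(front_samples, back_samples, c_tf, tau_tf, d, w=0.5):
    """front_samples[i], back_samples[i]: resampled scalars on the two slices
    bounding ray segment i; c_tf, tau_tf: transfer functions c(s), tau(s);
    d: ray-segment length; w: scale factor of the cost function."""
    c_acc, alpha_acc = 0.0, 0.0
    for sf, sb in zip(front_samples, back_samples):
        s = w * sf + (1.0 - w) * sb              # cost function: slab value
        alpha_i = 1.0 - np.exp(-tau_tf(s) * d)   # opacity of this segment
        c_i = alpha_i * c_tf(s)                  # color of this segment
        c_acc += (1.0 - alpha_acc) * c_i         # formula (2), front-to-back
        alpha_acc += (1.0 - alpha_acc) * alpha_i
        if alpha_acc > 0.99:                     # early ray termination
            break
    return c_acc, alpha_acc
```

Setting w = 1 recovers the traditional method, in which only the front slice decides the slab's value.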

4. Results

The traditional and the presented shear-warp algorithms were simulated in Matlab 7.0 on a workstation (Pentium(R) 4 2.40GHz CPU, 256MB memory), using the same CT slice images coming from the West China Center of Medical Sciences, Sichuan University. Figure 6 shows the image quality comparison:

(a) Traditional elevation  (b) Proposed elevation
(c) Traditional planform   (d) Proposed planform
(e) Partial magnified image

Figure 6: Traditional and proposed algorithms' image quality comparison

The simulation results show that the images produced by the traditional rendering method have some visible stripes, while in the proposed algorithm the stripes disappear.

5. Conclusions

By using the values of two adjacent slices to define the slab's value, which is more in line with the actual situation, the image quality is improved. The principle and the implementation results both show that the proposed method is effective for obtaining a higher image quality, and it almost does not affect the rendering speed.

Acknowledgment

The authors would like to thank Zhen Zheng for helping so much in the daily study, thank Wu Bingrong for providing the medical images for the simulation, thank all the people in the 903 staff room, and thank Peng Gang for instruction on writing papers in English.

References
[1] Lorensen W E, Cline H E, “Marching Cubes: A High Resolution 3D Surface Construction Algorithm”, Computer Graphics, 21(4), 1987, pp.163-169.
[2] Doi A, Koide A, “An Efficient Method of Triangulating Equi-Valued Surfaces by Using Tetrahedral Cells”, IEICE Transactions, E74(1), 1991, pp.214-224.
[3] Cline H E, Lorensen W E, “Two Algorithms for Three-dimensional Reconstruction of Tomograms”, Medical Physics, 15(3), 1988, pp.320-327.
[4] Levoy M, “Display of Surfaces from Volume Data”, IEEE Computer Graphics and Applications, 8(3), 1988, pp.29-37.
[5] Westover L, “Footprint Evaluation for Volume Rendering”, Computer Graphics, 24(4), 1990, pp.367-376.
[6] Lacroute P, Levoy M, “Fast Volume Rendering Using a Shear-Warp Factorization of the Viewing Transformation”, Computer Graphics Proceedings, 1994, pp.451-458.
[7] Lacroute P, “Fast Volume Rendering Using a Shear-Warp Factorization of the Viewing Transformation”, Technical Report CSL-TR-95-678, 1995.
[8] Schulze J P, Kraus M, Lang U, “Integrating Pre-Integration into the Shear-Warp Algorithm”, Proceedings of the 2003 Eurographics/IEEE TVCG Workshop on Volume Graphics, Tokyo: ACM Press, 2003, pp.109-118.
[9] Sweeney J, Mueller K, “Shear-Warp Deluxe: the Shear-Warp Algorithm Revisited”, Proceedings of the Symposium on Data Visualization, Barcelona: Eurographics Association, 2002, pp.95-104.

Conferencing, Paging, Voice Mailing via Asterisk EPBX

Ale Imran(1), Mohammed A Qadeer(2)
(1) Dept of Electronics Engg, Aligarh Muslim University, Aligarh, India
(2) Dept of Computer Engg, Aligarh Muslim University, Aligarh, India
{aleimran, maqadeer}@zhcet.ac.in

Abstract—This paper is intended to present the theoretical and implementation details of various important features that are generally associated with an Asterisk based voice exchange, i.e. conferencing, paging and voice mailing. The development of the Asterisk Based Voice Exchange, which works on VoIP, takes into consideration the various complexities associated with a conventional Private Branch Exchange (PBX). Besides being almost free, it provides various other features which one could say are almost patented to it, the most important being its all-software approach.

Keywords—Asterisk, VoIP, PBX, Conferencing, Paging, Voicemail

I. INTRODUCTION

Over the last few years Voice over Internet Protocol (VoIP) has become an important player in supporting telephony. The contribution to achieving this goal clearly goes to the various advantages that the technology offers: to name a few, a reduction in the bandwidth requirements and the availability of a large number of features such as selective call forwarding and rejection [4]; it can also help users save money on long international calls. A conventional circuit-switching based PBX is not only expensive but limited in terms of functionality as well.

Asterisk is a complete phone system in software. Instead of switching analog lines in hardware, it routes and manipulates Voice over Internet Protocol (VoIP) packets in software [4]. It can replace large and expensive phone systems powering thousands of extensions. The backbone of the system generally becomes an IP enabled network, essentially providing PC to PC data and voice communications, into which the phones are hooked; however, it also supports old analog phones using gateway devices. Asterisk provides multiple layers, managing both TDM and packet voice at the lower layers while offering a highly flexible platform for PBX and telephony applications, and it can bridge and translate different types of VoIP protocols such as SIP, IAX, MGCP and H.323 [2].

Fig 1: Overview of Asterisk based system [4]

Asterisk is different for several reasons. Because it is implemented in software, it is extremely versatile, easy to customize and easy to extend, and it provides more than what one would expect from a conventional PBX. Hence, to summarize, with Asterisk you can:

• provide basic service to analog and digital phones;
• route incoming and outgoing voice calls over standard voice lines or the Internet;
• develop call routing logic in order to choose the least expensive way to route a particular call;
• provide voicemail and teleconferencing services (which may be one-to-one or many-to-one, depending on the usage requirements);
• develop complex or simple interactive menus (Interactive Voice Response, IVR);
• operate small or large queues for call centers, announcing the estimated hold time to the callers;
• call other programs on the system.

On top of that, we get a variety of features such as paging, music on hold, detailed call logging into a database, configuration in SQL databases or flat files, optional web based administration interfaces, and interfaces to the operating system and programming languages for the extreme in power.

II. COMPLETE ASTERISK SYSTEM

A. Set Up of the Work

Asterisk is an open source converged telecommunications platform designed to allow different types of telephony hardware, middleware and software to interact with each other consistently. The application is developed in the C language and is compatible with all versions of Linux. Our approach follows the client-server model for all subsidiary procedures. Besides this, we concentrate on the web based approach for the configuration of the hard phones (the ST-302 hardphone) that will be used at the clients' end.

We begin implementing our voice exchange by compiling the Asterisk system. The following commands help us in compiling Asterisk:

cd /root/ale/asterisk
make
make install
make samples
make progdocs

Fig 2: Snapshot after the complete installation [4]

Once it has been successfully installed, we can start Asterisk on the server by running the following command:

/root/ale/asterisk -vvvc

B. Asterisk PBX configuration

In order to set up an Asterisk based private branch exchange, we require the following three major components [3]:

• an Asterisk based PBX;
• phones at the clients' end, which may be soft or hard depending upon the requirements;
• a VoIP gateway service, in order to enable a particular user to call others who might be on the PSTN or on the same IP network.

Since here we are interested in having hard phones at the clients' end, we have the choice of selecting a particular hard phone among the various available ones and configuring it. ST-302 is an IP enabled hard phone which uses the Inter Asterisk Exchange (IAX) protocol. It has got two RJ-45 ports, one for connection with a PC and another for connection with the existing IP enabled network. We put the Ethernet cable from our network in the port labeled RJ-45 and use another Ethernet cable, plugged into the RJ-45 jack labeled PC, to connect our computer with the phone.

Fig 3: Picture showing ST-302, the IP enabled hardphone

Now we need to create one user in the iax.conf file, because the phone uses the IAX protocol to connect with the Asterisk server. So we create a new user, say by the name of user1, which is going to be used with the ST-302 IP phone. In the iax.conf file, type=friend means that the user can make and receive calls; context=test shows that the user works with the extensions in this context of the configuration file extensions.conf; host=dynamic means that the IP address is assigned dynamically through a DHCP server; and allow=all means that the line which this user will be using can support all audio codecs.

Fig 4: IAX.conf file

Now let us have a look at the extensions.conf file that we will be using for setting up the various extensions:

[test]
Exten=>100,1,answer()
Exten=>100,2,dial(IAX/user1)
Exten=>100,3,hangup()
Exten=>200,1,answer()
Exten=>200,2,dial(IAX/user2)
Exten=>200,3,hangup()

Fig 5: Extensions.conf file

In the extensions.conf file, working in the [test] section, we have got two phone numbers, 100 and 200, and for each of them we have created three extensions. Each extension belongs to a context, either the default context or a specific context that we have created, for example for incoming IAX calls. User1 is the user which we are going to use for the ST-302 IP phone. When somebody dials the number 100, the call will be answered by the Answer application. The next application to be executed is the Dial application: the ST-302 phone will start ringing and the call will be connected to this phone. The last extension uses the Hangup application; its purpose is to hang up the line after the conversation is over. In fact, it is always a good idea to use this application in dial plans.

III. CONFIGURATION OF THE ST-302 IP PHONE

A. Making connections through the Web Interface

The configuration of the ST-302 IP phone can be done with the help of the web interface, and for this we need to know the IP address of the phone. The web interface has two access modes, i.e. ordinary and super. The first one does not give access to the settings concerning IAX; the super mode gives access to all the settings, including the ones for the IAX protocol. The mode used for the configuration of the ST-302 is the super access mode, since it allows the IAX settings.

Fig 6: Web Interface for the configuration of hard phone

Configuring the ST-302 with the help of the web interface allows us to perform the following operations:

1) Audio Settings: gives a user a free hand as far as the selection of various codecs for the hardphone is concerned. The choice may generally depend on the design aspect, and for our implementation we have set this to G.711. Besides this, it allows various other added features as well, for example voice activity detection (VAD), automatic gain control (AGC), automatic echo cancellation (AEC), jitter size and various others.

2) IAX Settings: the following parameters make up the complete IAX settings. a) Use service: enabling this particular feature allows calls to be made through the gatekeeper, which may enable long distance outgoing PSTN calls, local calls etc. [4]; if this feature is disabled, only IP to IP calls can be made. b) Local port: this is the port on which the phone will negotiate registration information with the server; generally by default it is 4569. Other necessary parameters besides these, such as the ppp id and ppp pin, are supplied in the same way.

3) Network Settings: includes the features generally associated with networks, such as the local IP, subnet mask, DNS, IP type and various others.

4) Dial Plan Settings: in this file, actions are connected to extensions.

IV. FEATURES OF ASTERISK

Asterisk based telephony solutions offer a rich and flexible feature set. Asterisk offers both classical PBX functionality and advanced features, and interoperates with traditional standards-based telephony systems and Voice over IP systems. Asterisk offers the advanced features that are often associated with large, high end proprietary PBXs. Here in this paper we will concentrate in detail on the paging, voicemail and conferencing applications of the Asterisk enabled voice exchange [3].

Fig 7: Management & Configuration Modules of Asterisk

A. Voicemail

This feature enables users to record messages for incoming calls that are not answered within a specified number of rings, receive busy treatment, or are transferred directly to voicemail. Asterisk comes with a voicemail storage capacity of more than a thousand hours; a message can be retrieved from any remote phone, attached to an email as a .WAV file, or kept in the voice messaging system repository for retrieval from a phone [4]. By accessing the voice portal from any phone, a user can listen to, save, delete or reply to a message received from one or more group members, with introductory messages. Moreover, users have the option of marking a particular message as urgent or confidential.

While installing Asterisk on the server, the first thing we need to do is to create the mailbox for Asterisk to use, with the help of the following utility:

/usr/src/asterisk/addmailbox

Moreover, we also need to edit the voicemail configuration file (vi voicemail.conf). Then we locate the section where we have added the entries for the extensions; after editing, the lines should look like this:

Exten=>1000,1,dial(IAX/100&IAX/200,20,tr)
Exten=>1000,2,voicemail,u9999
Exten=>1000,102,voicemail,b9999

What we have done over here is that when extension 1000 rings, the first thing we do is dial phone number 1 and phone number 2, i.e. 100 and 200, and make them ring for 20 seconds. If the extension is not answered in 20 seconds, the second entry will be executed, which is a voicemail; the mailbox is specified by u9999 at the end of the line. Moreover, there are two more things noticeable over here. The first is that the priority number in the third entry has jumped to 102. This is in fact quite a useful feature of Asterisk: when a call comes in and the person is already on the line, the priority jumps to n+101, which indicates that a busy message should be played and the caller should be allowed to leave a message. The second thing to notice is that the mailbox number at the end of that line is preceded by b9999.

Now, when the extension 1001 is dialed, we want the following to happen:
• the phone gets a ringing tone;
• there is a two second wait (the phone is still getting the ringing tone);
• the call is answered and goes straight to the voicemail menu for mailbox 9999.

In order to achieve this, we need to add the following lines:

Exten=>1001,1,ringing
Exten=>1001,2,wait(2)
Exten=>1001,3,voicemailMain,s9999

B. Call Conferencing

The call conference is a feature of the Asterisk solution based PBX system which provides a conference room system for use by all users. The call conference provided by the Asterisk based PBX has the following features: security passwords to control access to who can call into a conference bridge; unlimited simultaneous conferences, with unlimited participants; and incoming or outgoing calls may be transferred to a conference, or a conference may be directly dialed [4]. Joining a particular conference is as simple as dialing its extension. The conferencing feature provided by Asterisk offers the advantage of conferencing without any Zaptel hardware, as well as native codec streaming, which means that no mandatory downsampling is required. These two factors in fact provide a rather significant boost to VoIP based conferencing.

We need to execute the following commands to enable the conferencing feature on the server:

cd /usr/src/asterisk
cd app_conference
make clean
make
make install

Once the conferencing feature has been installed, we need to make the corresponding changes in the configuration files. The conference context of the dial plan is built from the following applications (shown here in outline, since they are combined with a helper context): DBget fetches the password of the dialed conference room (pass=conferences/${EXTEN}); gotoIf($["xxx${pass}"="xxxNONE"]) skips the password prompt when no password has been set; read(secret,pls-enter-conf-password,4) prompts the caller for the password; conference(${EXTEN}/MTV) places the caller into the conference room; and hangup ends the call afterwards. A helper context [confhelper] answers the call (answer), sets the response and digit timeouts (responsetimeout, digittimeout), plays the announcement prompts for the incoming and outgoing legs (background, playback), and waits for the caller's room selection (waitexten(20)).

C. Paging

This feature supports system wide paging and single phone intercom, as well as unlimited simultaneous parking of calls. Call parking enables a user to hold a call and to retrieve it from another station within the group. In order to park a call, a user presses the flash hook and dials the call park feature code; the call is parked and the caller is held. In order to retrieve the call, the user can go to any phone in the group and dial the call retrieve feature code, followed by the user's extension id [4]. As a result the call is retrieved and connected to the retrieving user.

1) One-to-Many Paging:

[One_way_page_group]
Exten=>1,1,Page(${One_way_paging_list})
Exten=>1,2,hangup

The above configuration will allow a user to one-way page (broadcast) to all the extensions defined in the variable One_way_paging_list, which in fact could be defined as:

One_way_paging_list=>IAX/100 & IAX/200

2) One-to-One Intercom: We first need to define a macro and then use it in the one-to-one intercom context.

[macro-pageext]
Exten=>s,1,chanisaval(${ARG1}/JS)        (where J is for dump and S is for ANY call)
Exten=>s,2,IAXAddheader(Call-Info: answer-after=0)
Exten=>s,3,Dial(${ARG1})
Exten=>s,4,hangup
Exten=>s,102,NoOp()

[INTERCOM GROUP]
Exten=>*5XX,1,macro(pageext,IAX/${EXTEN:1})
Exten=>*5XX,2,hangup

The above configuration will allow a user to intercom with any extension by dialing *5 followed by the user's extension (*5XX).

3) One-to-Many Intercom:

[Two_way_intercom_GROUP]
Exten=>**2,1,IAXAddheader(Call-Info: answer-after=0)
Exten=>**2,2,Page(${Two_way_intercom_list}/d)
Exten=>**2,3,hangup

The above configuration will allow a user to do a two-way intercom with all the extensions defined in the variable Two_way_intercom_list, defined as follows:

Two_way_intercom_list=>IAX/100 & IAX/200

V. SUMMARY

We expect that the design and implementation aspects presented in this paper will be a valuable development guide for similar kinds of applications. Asterisk has proven to be a viable PBX for future research studies, and this project has provided us with an invaluable experience related to VoIP and collaborative efforts. Future work could proceed along the following lines:

• using an Asterisk based server for connecting two remote locations;
• implementing IVR, multiple auto attendants, music-on-hold and various other features of the Asterisk based PBX.

REFERENCES

[1] Taemoor Abbasi, Shekhar Prasad, Nabil Seddigh, Ioannis Lambadaris, “A comparative study of the SIP & IAX voice protocols”, in CCECE/CCGEI, Saskatoon, May 2005.
[2] Md. Zaidul Alam, Saugata Bose, Md. Mhafuzur Rahman, Mohammad Abdullah Al-Mumin, “Small office PBX using Voice over IP”, in ICACT, 12-14 Feb. 2007.
[3] Jim Van Meggelen, Leif Madsen, Jared Smith, “Asterisk: The Future of Telephony”, Second Edition, August 2007.
[4] Mohammed A Qadeer and Ale Imran, “Asterisk Voice Exchange: An alternative to conventional EPBX”, in Proc. IEEE ICCEE 2008, Dec 20-22, 2008.

A New Mind Evolutionary Algorithm Based on Information Entropy

Yuxia Qiu
The College of Management Science and Engineering, Shanxi University of Economics and Finance, Taiyuan, P.R. China, 030006
qyxljl@yahoo.com.cn

Keming Xie
The College of Information Engineering, Taiyuan University of Technology, Taiyuan, P.R. China, 030024
kemingxie@tyut.edu.cn

Abstract—A new self-adaptive Mind Evolutionary Algorithm based on information entropy is proposed to improve the algorithmic convergence, especially in the late evolutionary stage. Since a memory and directional learning mechanism is introduced, the exploration of the algorithm is more purposeful and sufficient and the performance is improved. In this way, prematurity is avoided and the population evolves towards the global solution. The experimental results show that the algorithm is valid and advanced.

Keywords—Evolutionary computing, Mind Evolutionary Algorithm (MEA), information entropy

I. INTRODUCTION

The Mind Evolutionary Algorithm (MEA) simulates the evolutionary process of the human mind. As a new type of evolutionary algorithm (EA), MEA inherits its basic properties from traditional EAs. It extracts the virtues of GA and ES and overcomes their disadvantages: the 'similartax' and 'dissimilation' operators are introduced, and monolayer population evolution is improved to multilayer population evolution, that is, a 'widely explore' and 'accurately scan' hand-in-hand search mode, whereby the search efficiency is increased greatly [1]. MEA has been successfully applied to some practical problems [2, 3], many scholars have taken it as their research object, and some delightful achievements have been made [4].

As has been studied, it usually takes an algorithm a short time to find a good solution but an oppositely long time to find the best solution; that means that in the late evolutionary stage the convergence deteriorates. We try to solve this problem by introducing information entropy, and hope to improve other aspects of the performance of MEA at the same time.

II. THE POPULATION ENTROPY OF MEA

A. Basic MEA

The whole solution space is divided into subspaces by MEA, and the 'similartax' and 'dissimilation' operators are used to realize the evolutionary operation. The evolution begins with a stochastic population; then the similartax operator performs the local search while the dissimilation operator guarantees the global coverage.

The similartax operator completes the competition inside a subpopulation on its subspace and realizes local optimization. When "learning" starts, individuals are randomly scattered in the solution space and their scores are calculated respectively. Those with higher scores are kept as the original winners of the superior subpopulations, and those with lower scores are the winners of the temporary subpopulations. New individuals are then produced by a normal distribution with some variance around each winner, and the one with the highest score becomes the new winner, replacing the old one in the following steps.

The dissimilation operator completes the competition between the subpopulations on the whole solution space and realizes global optimization: competition occurs between the winners of the subpopulations, and some with lower scores are washed out and replaced by new ones scattered at random in the solution space, which keeps the global searching ability of the population.

Figure 1. Sketch Map for Movement of Populations

As shown in Figure 1, the evolution is a process in which the search region becomes more and more clear.

B. The Population Entropy

When MEA is used to solve an optimization problem, the optimization process is quite similar to the communication process in information theory: the former introduces negative entropy into the system just as the latter does. In this sense, the evolution of a population is a process of entropy reduction, and the population entropy can be used to reflect the evolution state. Thus a self-adaptive strategy can be realized by building a population entropy computing module to estimate the region that includes the global optimal solution.

Suppose the solution region can be cut into M areas, marked as z_1, z_2, ..., z_M. Draw an individual x from the population; it is uncertain into which area the individual falls. If the probability that x comes from area z_i is p_i, then

$$\sum_{i=1}^{M} p_i = 1, \qquad p_i \ge 0, \quad i = 1, 2, \ldots, M$$

Based on the definition of information entropy, we give the following definition.

Definition 1. The population entropy, as a measurement of the denseness of the individuals in a population, is defined by

$$H_P = -\sum_{i=1}^{M} p_i \ln p_i \qquad (1)$$

The minimum and maximum values of H_P are 0 and ln M. With the evolutionary operators running, the entropy reduction process can be described as follows: at the beginning of the evolution, the population is absolutely stochastic and the least knowledge of the optimal solution is known, so the information entropy is at its largest; in the late stage of the evolution, the population approaches the optimal solution without limit and the individuals are concentrated at it, which means the entropy is at its smallest.

C. The Population Entropy Estimation

In MEA, the entropy H_P(t) of a population cannot be computed exactly before the optimal solution is obtained, but it can be estimated. Here a strategy is proposed to accomplish this task. Suppose f_min and f_max respectively denote the minimum and maximum fitness values that the MEA has explored so far, and let λ = f_max - f_min. The solution region is then plotted into M areas [f_min + α_i·λ, f_min + α_{i+1}·λ], where α_1 = 0, 0 ≤ α_i ≤ 1 and α_i > α_{i-1} for i = 2, 3, ..., M, α_{M+1} = 1 + ε, and ε is a small positive number. The number of superior subpopulations is N_S, and we let M = N_S. If N is the population size and m_i is the number of individuals from the i-th area, the probability that an individual comes from the i-th area can be approximately calculated as

$$\hat{p}_i = \frac{m_i}{N}, \qquad i = 1, 2, \ldots, M \qquad (2)$$

Accordingly, the estimate of the population entropy can be computed as

$$\hat{H}_P = -\sum_{i=1}^{M} \hat{p}_i \ln \hat{p}_i \qquad (3)$$

III. SELF-ADAPTIVE MEA BASED ON POPULATION ENTROPY (PEMEA)

As analyzed above, once the population entropy is calculated it is not difficult to estimate the distance between the population and the optimal solution, so MEA can be improved by introducing the entropy to adjust the search region. In MEA, the variance σ is a key factor controlling the population's search zone; hence the self-adjustment of σ can realize the self-adaptation of the algorithm. Suppose σ_i is the similartax variance of the i-th generation; it is calculated as

$$\sigma_i^2 = C \cdot \hat{H}_P(i) \qquad (4)$$

where C = 1/ln M. Formula (4) describes the parameter self-adjustment strategy of the new algorithm. New individuals are produced by a normal distribution around each winner; the actual calculation follows formula (5):

$$x_k = N(x_{k-1}, \sigma_k) = x_{k-1} + \sigma_k \cdot r \qquad (5)$$

where r is a random number, x_{k-1} is the old (father) individual of the (k-1)-th generation, x_k is the new (son) individual of the k-th generation, and σ_k is the similartax variance.

Based on the above idea, the steps of the PEMEA algorithm are described as follows (an illustrative implementation of formulas (2)-(4) is sketched after the list):

Step 1. Set the evolutionary parameters: population size, subpopulation size and the conditions for ending.
Step 2. Initialization: scatter the individuals composing the initial population over the whole solution space.
Step 3. Similartax operation: new individuals are produced by a normal distribution around each winner, with the variance σ_k calculated according to formula (4); the individual with the highest score is the new winner, replacing the old one in the following steps.
Step 4. Dissimilation operation: realize the global optimization; the subpopulations with lower scores are washed out and replaced by new ones scattered at random in the solution space.
Step 5. Conditions for ending: if the ending conditions are fulfilled, turn to Step 6; else repeat Step 3 and Step 4.
Step 6. Output the evolutionary result.
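The entropy estimate of formulas (2)-(3) and the variance rule of formula (4) are straightforward to compute. The following numpy sketch is illustrative only; the particular band fractions (alphas) and the example values are assumptions, not taken from the paper.

```python
import numpy as np

def population_entropy(fitness: np.ndarray, alphas: np.ndarray) -> float:
    """fitness: scores of all N individuals; alphas: increasing fractions
    alpha_1=0, ..., alpha_{M+1}=1+eps cutting [f_min, f_max] into M areas."""
    f_min, f_max = fitness.min(), fitness.max()
    if f_max == f_min:
        return 0.0                                  # fully concentrated population
    edges = f_min + alphas * (f_max - f_min)        # area boundaries
    counts, _ = np.histogram(fitness, bins=edges)   # m_i for each area
    p = counts / counts.sum()                       # formula (2)
    p = p[p > 0]                                    # convention: 0*ln(0) = 0
    return float(-(p * np.log(p)).sum())            # formula (3)

def similartax_sigma(h_estimate: float, m_areas: int) -> float:
    c = 1.0 / np.log(m_areas)                       # C = 1/ln(M)
    return float(np.sqrt(c * h_estimate))           # formula (4): sigma^2 = C*H

# As the population concentrates, the entropy and hence sigma shrink,
# narrowing the similartax search zone around each winner.
rng = np.random.default_rng(0)
alphas = np.append(np.linspace(0.0, 0.9, 10), 1.0 + 1e-9)  # M = 10 areas
fitness = rng.uniform(0.0, 1.0, size=50)                   # N = 50 individuals
h = population_entropy(fitness, alphas)
print(h, similartax_sigma(h, 10))
```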

IV. SIMULATION EXPERIMENT

Three classic numerical optimization test functions are applied, and a stochastic contrast experiment between MEA and PEMEA is carried out to analyze how the algorithmic performance has been improved:

• De Jong's function 1:
$$f_1(x) = \sum_{i=1}^{3} x_i^2, \qquad x_i \in [-5.12, 5.12]$$

• Rosenbrock:
$$f_2(x) = 100(x_1^2 - x_2)^2 + (1 - x_1)^2, \qquad x_i \in [-2.048, 2.048]$$

• Schaffer's F6:
$$f_3(x) = 0.5 - \frac{\sin^2\!\sqrt{x_1^2 + x_2^2} - 0.5}{\big[1 + 0.001(x_1^2 + x_2^2)\big]^2}, \qquad x_i \in [-100, 100]$$

Table 1. Control parameters of the algorithms

Parameter                  | MEA                   | PEMEA
Population size            | N=50                  | N=50
Sub-population size        | M=10                  | M=10
Maximum generation number  | G=100                 | G=100
Variance                   | σ_k = C_1·σ_{k-1}     | σ_k = (C_2·Ĥ_k(i))^{1/2}

The parameter values are set as shown in Table 1, and the two algorithms are each run 100 times to optimize test functions 1 to 3. If the error between the evolutionary result and the optimal solution is less than 0.0001, the run is accepted as convergent; otherwise it is not. The experimental results are recorded in Table 2.

Table 2. Experimental results

Test function         | Algorithm | Convergence ratio (%) | Average convergence generation number
De Jong's function 1  | MEA       | 100                   | 16
                      | PEMEA     | 100                   | 6
Rosenbrock            | MEA       | 100                   | 39
                      | PEMEA     | 100                   | 13
Schaffer's F6         | MEA       | 100                   | 20
                      | PEMEA     | 100                   | 11

The theoretical optimal solutions of the three functions are 0, 0 and 1 respectively, and the recorded optimal experimental solutions (down to the order of 1.64e-59 for De Jong's function 1) differ from them by far less than the convergence criterion.

The algorithms were also run with different values of the coefficient C_1 to analyze the effect of this coefficient on MEA, thereby testing the advantage of the new algorithm with respect to parameter adjustment. The results are shown in Figures 2 to 7.

Figure 2. Optimization of Rosenbrock function with PEMEA and MEA

It can be seen from Figure 2 and Figure 5 that the performance of MEA is precarious because it is badly affected by the parameter, while PEMEA overcomes this shortcoming. From Figures 3, 4, 6 and 7 it can easily be seen that both PEMEA and MEA have good convergence, but the speed of the former is apparently faster than that of the latter with absolute precision, and it indeed arrives at the theoretical optimal solution. Note that it is the off-line performance rather than the on-line performance that is improved greatly, because PEMEA takes the single best individual as its representative and ignores the average state. From the analysis of the experimental results it can be concluded that the search strategy based on the population entropy greatly improves the robustness, convergence speed and precision of MEA, and that PEMEA is an efficient and stable optimization algorithm.

V. CONCLUSIONS

The idea of information entropy from information theory is introduced into the design of the MEA operators, and a new algorithm named PEMEA is proposed. In one word, the performance of MEA is improved largely in PEMEA.

ACKNOWLEDGMENT

This work was supported by a grant from the Specialized Research Fund for the Doctoral Program of Higher Education, P.R.C. (No. 2006112005), and by the Visiting Scholar Foundation of Shanxi Province, P.R.C. (No. 2004-18).

REFERENCES

[1] Chengyi Sun and Yan Sun, “Mind-Evolutionary-based Machine Learning: Framework and the Implementation of Optimization”, Proc. of IEEE INES'98, 1998, pp.355-359.
[2] Keming Xie and Yuxia Qiu, “Multi-model Parallel Tuning for Two Degree-of-Freedom PID Parameters”, Proc. of the 6th International Symposium on Test and Measurement, 2005, pp.1162-1165.
[3] Gang Xie and Keming Xie, “Analysis on the Convergence of MEBML Based on Mind Evolution”, System Engineering and Electronics, 2000, pp.838-842.
[4] Chuanlong Wang and Chengyi Sun, “Convergence Analysis of Mind Evolutionary Algorithm Based on Sequence Model”, Journal of Computer Research and Development, 2007, 29(2), pp.308-311.

Figure 3. Off-line performance of PEMEA and MEA with Rosenbrock function
Figure 4. On-line performance of PEMEA and MEA with Rosenbrock function
Figure 5. Off-line performance of PEMEA and MEA with Schaffer's F6
Figure 6. Optimization of Schaffer's F6 with PEMEA and MEA
Figure 7. On-line performance of PEMEA and MEA with Schaffer's F6

An Encapsulation Structure and Description Specification for Application Level Software Components

Jin Guojie, Yin Baolin
State Key Laboratory of Software Development Environment, Beihang University, School of Computer Science, 100083 Beijing, China
{jinguojie, yin}@nlsde.buaa.edu.cn

Abstract—Traditional research on software components mainly aims at components at the level of program and source code, which are difficult to accommodate relatively integral business features. A novel component model named Application Level Component (ALC) is proposed in this paper. The inherent deficiencies of program level components are analyzed, followed by an observation of the unique requirements of application level components; then a reference model in terms of the encapsulation structure and description specification is presented. The main advantage of the model is to propel the integration and reuse of legacy software resources by means of transforming currently existing applications into application level components, with wrappers providing standard interfaces. The generalization and applicability of the model are evaluated by real cases. In the end, we established several demonstrations in real practice to manifest the conclusion that the model applies to a large scope of domain engineering and can be made the basis of application level component-based design.

Keywords—software component, application level component, reference model, specification

I. INTRODUCTION

Component technology is of growing importance in software engineering and is considered to be a critical measure to enhance the reuse ability of software modules. Software components can be classified into two main categories according to their functional granularity: components at the program code level and others at the application level [1]. Traditional software components research mainly aims at the former category: the component granularity is limited to fundamental software elements like code segments, functions and objects/classes, together with a structural model, an interface specification, and a description mechanism to represent the necessary information about the component's function and usage [1][4]. Long-term application practice has indicated that the reuse ability of such program level components is limited to a low level because of their inherent deficiencies, as follows:

(1) The component granularity is too fine to undertake a complete item of business function, like the orders management task in an ERP system, which is composed of a large number of small code level components working together and relying on each other. This means that there is a high coupling degree among the code level components.

(2) The program level components are developed mainly by skillful programmers with a specific type of programming language, which has a strong dependency on the development environment and operating platform. As technology and tools evolve rapidly, few components developed today may remain reusable after a certain number of years [4].

(3) The component model and interface standard have not been commonly recognized, resulting in the formation of a variety of software standards in the business world, including CORBA, EJB, COM/COM+, etc. [2], thereby causing the absence of a common basis for component analysis and evaluation. As a consequence, for any unique component, its reuse degree in a new business environment is significantly low, which is a situation violating the initial objective of component engineering [3][7].

The Application Level Component proposed in this paper addresses these problems. An ALC is defined as a software module at the granularity of a standalone executable software application, consisting of an integral item of business function [7][8]. Compared with program level components, application level components possess a relatively more complete functional granularity, so the coupling degree among the components becomes looser, which is more suitable for domain modelling, and the dependency between the components and their runtime environments can reasonably be lowered to a certain extent. Application level components containing a comparatively integral granularity of business function are thus more suitable for a larger scale of application, covering design, integration and testing.

Although application level components show advantages in many aspects discussed above, there is still a lack of research attention towards them. In the current literature only a few related concept keywords can be found, like "software level components" and "components of complete applications" [5][7][8], and there is still a lack of a reference model and description specification in this field. This is currently, as we observed, a bottleneck with no effective solution. Therefore, a general reference model should be built to model the components falling into this category.

II. SURVEY ABOUT APPLICATION LEVEL COMPONENTS

As a specific subclass of software components, application level components should meet the requirements that all software components have in common. To make the model more applicable and acceptable, a survey of ALCs was done by comparing them with traditional software components. The conclusion drawn from the comparison is that there are essential differences between traditional components and ALCs in many factors, from the internal structure to the interface requirements. As the ALCs tend to be formed at a higher granularity with more complete domain logic, they have many unique characteristics which should be deliberately taken into account while designing the reference model:

(1) The core issue of the ALC model is to provide the ability of encapsulating and reusing currently existing software and systems, like Microsoft Office or an OA/ERP system. While traditional components are primarily defined as functions or objects, ALCs are executable programs whose life cycle starts from the creation of the corresponding process instance and ends when the process halts, so specific instructions must be provided to control the startup and termination of an ALC through its interface. This is an essential difference between traditional components and ALCs.

(2) An interprocess mechanism for communication among ALCs is needed. Only function calls and object requests are needed for the communication of program level components, whereas the ALCs within a system should communicate with each other in a way that supports interprocess data transferring.

(3) The invocation and assembling style of an ALC has a close link with the business requirements which it undertakes.

As a large number of legacy software systems in business practice have to be abandoned only because they fail to keep up with changing external requirements, the reuse of the numerous functional modules inside these systems is a key path to reducing duplicate coding and investment waste.

II. SURVEY OF THE ALC

A survey of the ALC is done by comparing ALCs with traditional software components. Traditional software components are primarily defined as functions or objects, like a primitive "Hello world" printing routine, equipped with an interface specification and a description mechanism to represent necessary information about the component's function and usage [1][4]. Traditional components are defined and assembled in terms of program code, so only function calls and object requests are needed for the communication of program level components. By contrast, ALCs are standalone executable programs whose life cycle starts from the creation of the corresponding process instance and ends when the process halts, so the ALCs within a system should communicate with each other in a way that supports interprocess data transfer. This is an essential difference between traditional components and ALCs, and the comparison shows essential differences in many factors, from the internal structure to the interface requirements. As ALCs tend to be formed at a higher granularity with more complete domain logic, they have many unique characteristics which should be deliberately taken into account while designing the reference model: (1) the core issue of the ALC model is to provide the ability to encapsulate and reuse currently existing software and systems; (2) an interprocess mechanism for communication among ALCs is needed, and specific instructions must be provided to control the startup and termination of an ALC through its interface; (3) the invocation and assembly style of the ALC has a close link with the business requirements which it undertakes.

III. REFERENCE MODEL

To make the model more applicable and acceptable, a novel ALC-oriented reference model was developed and is still being improved. The design target of the model is to represent a set of necessary description information about ALCs and to facilitate the reuse of legacy software systems through the integration and encapsulation of business elements such as applications and forms with a uniform interface standard. The structure of the reference model is shown in Fig. 1, and the main components are as follows: (1) the applications and forms to be encapsulated; (2) a description file representing the description information; (3) a parser program administrating the execution; (4) a data area maintaining the runtime data environment; (5) the component interfaces exposed to callers.

Figure 1. Structural Model of Application Level Component (applications A/B/C and forms A/B/C, an application wrapper, the component parser, the data area, the component description file, and the description, data and control interfaces)

A. Applications and Forms

To encapsulate currently existing software and systems, an intensive study led to the conclusion that numerous documents and electronic forms in business practice, which contain business information of great use, have been left idle for years. The term Forms is used to stand for all sorts of electronic files containing information about visual layouts and manual interactions for a given set of business content; Microsoft Excel spreadsheets (.xls) and Word documents (.doc) are all of this category. Forms are usually used as companions of a dedicated type of application, and they work collaboratively to perform business tasks: the form is mainly responsible for the visual representation of the business data, while the application handles the parsing and loading of the form as well as the interactions with the operators, so they can be treated as joint entities of business tools. For example, an Excel spreadsheet containing records of staff payment can be parsed and loaded by the main program of Excel from Microsoft Office, so the records can be shown on the screen and then be viewed and modified through the graphical interface; after manipulation, the user's operation result is saved in the spreadsheet and can then be extracted. From the ALC point of view, applications can be on any scale, from single executable program files to complex software suites like Microsoft Office or an OA/ERP system, so the property Applications is certainly a significant part of the model. For more flexible usage, a basic run unit is defined as an application plus a corresponding form. Considering more complicated business tasks involving multiple applications and forms, several run units can be encapsulated in one ALC and routed according to a specified property named Sequence; the run units work in a collaborative way to accomplish the business functions. Furthermore, advanced flow structures like choices and loops can be used to indicate the execution sequence of the run units.
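As a reading aid, the structure just described can be sketched as a small data model. This is our own illustration, not part of the paper's specification, and all class and field names are assumptions:

    // Hypothetical sketch of the ALC structure described above; all names
    // are illustrative assumptions, not the paper's model.
    import java.util.List;
    import java.util.Map;

    class RunUnit {
        String application;   // path of the executable to launch
        String form;          // the document/spreadsheet the application loads
    }

    class ApplicationLevelComponent {
        String id;                      // unique identity code
        List<RunUnit> runUnits;         // application + form pairs
        List<String> sequence;          // routing rules among run units
        Map<String, String> dataArea;   // intermediate working data (name -> value)
    }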

i. Here UUID protocol can be combined to meet the demand. data name in the form}. extract data from the processed form and output them with a standard style(-read). a specific module is needed to take charge of the scheduling of the run units according to the description file.brief information about usage Type . Type}. all shown in Table1. the parser reads out data from the form and saves them as intermediate working data which is to be processed in the next run unit. Component Parser To administrate the run units in an ALC. a type of wrapper technology[8] can be used to provide an external layer called “application wrapper”. 1). and Type is the data type including {string. recordset}.the executable programs to be encapsulated Forms . float. like application A and B in Fig. Category Descriptive properties CONTENT OF THE COMPONENT DESCRIPTION FILE Property Name & Usage ID . It gives a strong hint for determining the appropriate running environment of the component. schedule all run units according to the logic expressions of property Sequence. the form written in with current component data is delivered as a command parameter to the application for manipulating. a type of applications with visual forms and dialogs are usually to be manipulated by staff. (3) as the user ends operating. read in the input data as its original working data. “recordset” type is designated to abstract a typical set of relational data records in business practice. Component Description File The description file is a repository for a series of properties representing the necessary information about the component. Meanwhile. This is the element called component parser. (5) when the component ends up. exec. (2) Name & Description: properties that simply provide the textuary name and brief information about usage. Therefore. export the working results which can be manipulated by the caller. They describe the constraint conditions to be judged when each form is submitted or when the component ends up. the parser evaluate the expressions of Constraint Conditions for Form Data after each run unit step. In this way. where Name is the data field’s identification. Forms and Sequence discussed above. some other properties should also be included. Other materials like versions and created time can also be put here. Data-related properties 197 . so the component itself can learn the erroneous situation timely and do error handling or emit exceptions to the caller. The expressions are composed by data fields that representing the component’s current state. the calling interface of the applications can be summarized as a collection of command params {-write.To facilitate the encapsulation of existing applications. -read}. a broad scale of legacy systems can be transformed into standard ALCs without touching the inside. For the applications and legacy systems incompatible with the interface standard. The descriptive properties are designed to indicate some generic description information as follows: (1) ID: an unique identity code assigned to each component to ensure the uniqueness in designing and running environment. It works as the call entry of the ALC and conducts the running procedure during the component’s whole life circle as follows: (1) as the component is launching.the execution route for multiple run units Forms Mapping . datetime. 
(4) Forms Mapping: indicates the mapping relation between the data fields exposed in the component interface and the data within the inside forms.whether the component is UI-driven Applications . which describes the rules about how to insert the component data into each forms and extract them vice versa. (2) load and display the working form for browsing and modification(exec). 1. To enhance the ALC’s power of data processing. and then can be reused in new business environments. It is defined as a 3tuple{ component data name. Each data field is designed as a 2-tuple{Name.a unique identity code Name .the conditions that should be satisfied for the input and output data of the component Input & Output Data Fields .the conditions that should be satisfied after each time of form manipulation Constraint Conditions for Component Data . which dominates the program internally and providing external interface that meets demands (shown as application C and its wrapper in Fig. and Constraint Conditions for Component Data before exit. When the application ends. (2) during runtime.. B. the parse output the current component data and exits. Beside the items of Applications. while others without any user interface are more likely to execute automatically. a common interface standard should be established for all the applications to be encapsulated. (5) Constraint Conditions for Form Data & Component Data: a set of logic expressions to keep validity during runtime. (3) handle runtime communication with the caller. Considering the simplest case. (4) to ensure validity. TABLE I. the common tasks that the applications carried out in each run unit can be described as: (1) fill in the form template with initial business data (-write). The properties in the description file can be classified into to main categories: the descriptive properties and the data-related properties. components providing dialogs for business data browsing and maintaining naturally fall into the first category. For example.e.textuary name Description .data coupling between the component and the inside forms Constraint Conditions for Form Data .data fields that the component receives and outputs (3) Type: indicates whether the component has interactive user interface. other standard business-independent tasks like data I/O and interface handling can all be undertook by this module. In each step. form name. If there is no satisfied expression for next step. and the components doing job for batch printing or email sending usually refer to the latter. int. Any program in accordance with this standard can be directly integrated into a component. C. The data-related properties are used to describe the input/output data fields of the component.the documents/spreadsheets to be encapsulated Sequence .
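To make the calling convention concrete, the following is a minimal sketch of one run-unit step under the {-write, exec, -read} standard described above. The wrapper executable and form names are borrowed from the paper's later example; the helper structure is our own assumption:

    // Illustrative sketch of one run-unit step under the {-write, exec, -read}
    // convention. The wrapper path and file names are assumptions.
    import java.io.IOException;

    public class RunUnitStep {
        // Launches the wrapped application once per phase and waits for it.
        static void run(String wrapper, String form) throws IOException, InterruptedException {
            exec(wrapper, "-write", form);  // fill the form template with initial data
            exec(wrapper, "exec", form);    // let the operator browse/modify the form
            exec(wrapper, "-read", form);   // extract the edited data in a standard style
        }

        static void exec(String... cmd) throws IOException, InterruptedException {
            Process p = new ProcessBuilder(cmd).inheritIO().start();
            if (p.waitFor() != 0) {
                throw new IOException("run unit failed: " + String.join(" ", cmd));
            }
        }

        public static void main(String[] args) throws Exception {
            run("msword-wrapper.exe", "Contract.doc");  // names from the paper's example
        }
    }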

i. It is formed as a collection of expressions like <Source [condition] => Destination>. Table 3 shows the description file of the “loan forms filling component” in the newly developed Housing Fund Reviewing System(HFRS). which are necessary for use.doc). Furthermore. where: (1) Source is the current run unit represented as a pair of “Application:Form”. the descriptive instructions are mainly used in system assembling stage. V. When the component ends running.The above tasks of the component parser are all based on the content of the description file which are absolutely business-independent. the values kept in the data area are definitely the final work result. so the parser itself is kept away from any business logic and gains a common sense in all ALCs. thus giving the component a “black-box” feature. including a loan contract form (Contract. The condition is composed by the component data fields which can be evaluated at run time. can enable the operators to save multiple versions of work effort at will. Data Area Data area is a region allocated in memory. (1) Descriptive instructions: to query the basic description properties of the component. This feature. (3) Destination is the run unit to be executed next when the condition is satisfied. a data crash after an unexpected system halt. (3) Controlling Instructions: to provide methods for controlling startup and termination of the component. Other description elements include the classes of Interface describing the collection of interface instructions. and revert to one in some situation. Component Interfaces The component interfaces are exclusively the only entry exposed to the users for calling and controlling. for the system developer to identify the usage of the component. a credit rating report (Credit. there had been a long time for the staff to manually process the 198 . Each of the structural elements and interfaces in the model is abstracted as a class in the diagram. while the data-related and controlling instructions are mainly used in system running stage to invoke the proper execution behavior of the component. Each item in the region is a 2-tuple {DataName. DataValue}. During runtime. InputData and OutputData describing the input/output data fields. e.g. which simply contains the data names and values in an intuitionistic format. carried out by the component parser as a standard function. It stores all the intermediate working data between multiple run units. (2) Condition is a logic expression indicating the routing path and flow condition between every two run units. and classes of FormConstraintCondition and Input/Output DataConstraintCondition describing various types of conditions in the model. a particular feature is assigned to the model for dumping the intermediate running state to a disk file which can be loaded from to resume the previous state. Classified by functional type. Data area is created and maintained by the parser. As an example. FormsMapping describing the data coupling between the forms and the component. In this way. TABLE II. The data file is assigned as the parameters of the instructions --load-data and --save-data. the internal details can all be hidden to the outside.doc). the parser allocates a piece of the memory and fills it with all the original input data. The class Sequence is used to describe the execution steps of run units. As before the development of HFRS. an ALC is abstracted as a core class ApplicationLevelComponent. 
CONTENT OF ALC COMPONENT INTERFACE Instruction Name & Parameters --get-id --get-name --get-description --get-type --get-input-data / --get-output-data --load-data <data_filename> --save-data <data_filename> --exec <input_datafile> <output_datafile> --terminate --dump-image <image_filename> --load-image <image_filename> Figure 2..e. The interfaces of the ALC model are defined as a set of instructions supporting the minimum collection of calling and integration requirements. the output of each run unit is took over by the parser and then be updated into the data area. E. and a house mortgage evaluating report (Eval. there are three categories of instructions shown in Table 2. 2. IV. where DataName is corresponding to a component data field’s name and DataValue contains the current value. The component contains three forms to be filled in by the fund applicants. All these classes and their relations in the diagram form an explicit description specification of the ALC reference model. As a whole. SPECIFICATION OF THE REFERENCE MODEL An concrete description specification of the ALC model is presented as the UML class diagram shown in Fig.. Here a platform-independent XML file format is used to facilitate data exchange. (2) Data-related instructions: to provide a uniform rule to handle data I/O between the component and its caller. Structural Model of Application Level Component Generally. EVALUATION Type Descriptive instructions Data-related instructions Controlling Instructions The evaluation is done by the development of several typical business systems fully assembled by ALC components built with the method proposed here. D.doc). As the component launches.
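A caller-side sketch of driving an ALC through the instruction set of Table II might look as follows; the component's executable name and the XML file names are assumptions for illustration:

    // Hypothetical caller-side sketch: driving an ALC through the
    // command-line instructions of Table II. Names are assumptions.
    import java.io.IOException;

    public class AlcCaller {
        public static void main(String[] args) throws IOException, InterruptedException {
            // Query a descriptive property (system assembling stage).
            run("loan-forms-filling-alc", "--get-name");
            // Run the component with XML data files (system running stage).
            run("loan-forms-filling-alc", "--exec", "input.xml", "output.xml");
            // Snapshot the intermediate state so work can be resumed later.
            run("loan-forms-filling-alc", "--dump-image", "session1.img");
        }

        private static void run(String... cmd) throws IOException, InterruptedException {
            new ProcessBuilder(cmd).inheritIO().start().waitFor();
        }
    }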

Before the development of HFRS, the staff had for a long time manually processed the applications using emails and some desktop office suites like Microsoft Word, so a collection of standard business forms had been accumulated. Since the forms are familiar to the staff, it was desirable to reuse them in the new system of HFRS. This is just the situation that the ALC model applies to: the forms are encapsulated and transformed into ALCs using wrapper technology at low cost. A wrapper for Microsoft Word was developed (shown in Table III as "msword-wrapper.exe"), which invokes and communicates with the Word application through the open interface supported by Microsoft Office via COM/OLE [2].

TABLE III. DESCRIPTION FILE OF LOAN FORMS FILLING COMPONENT

ID: 5588ce2e-0808-11dc-9a1b-00a0c90a8199
Name: Loan Forms Filling Component
Description: ALC for filling loan forms by applicants (V1.0, 5/11/2008)
Type: Manual
Applications: msword-wrapper.exe
Forms: Contract.doc, Credit.doc, Eval.doc
Sequence:
  Begin => msword-wrapper.exe:Contract.doc
  msword-wrapper.exe:Contract.doc => msword-wrapper.exe:Credit.doc
  msword-wrapper.exe:Credit.doc => msword-wrapper.exe:Eval.doc
  msword-wrapper.exe:Eval.doc => End
FormsMapping: Name = [Contract.doc].AppliciantName, Address = [Contract.doc].AppliciantAddress, Name = [Credit.doc].AppliciantName, Sex = [Credit.doc].AppliciantSex, SN = [Credit.doc].SN, HouseSN = [Eval.doc].HouseSN, ...
FormConstraintConditions: [Contract.doc].AppliciantName == [Credit.doc].AppliciantName, [Credit.doc].AppliciantName == [Eval.doc].AppliciantName, ...
InputData: ...
OutputData: string Name, string Sex, string SN, string Address, string Phone, datetime AppDate, string HouseSN, string HouseAddress, ...
InputDataConstraintConditions: ...
OutputDataConstraintConditions: Name != NULL && SN != NULL && HouseSN != NULL && ...

The HFRS, a typical medium-scale system popular in the business domain, is constructed from 24 ALCs in total, of which 75% are UI-driven components built with the wrapper for Microsoft Word (the UIs of the loan contract form and of the house mortgage evaluating report are shown in Figs. 3 and 4), and the others are components for automatic business tasks including document printing and email sending. With the wrapper implemented in a short period, all the components were developed simply by writing distinct description files.

Fig. 3. UI of Loan Contract Form
Fig. 4. UI of House Mortgage Evaluating Report

As a statistic, the development of HFRS using the traditional method would cost approximately a minimum workload of 1.5 man-months, while with our method it was done in 25 days (including the development of several wrappers for about 10 days), an efficiency improvement of 45%. Considering that the wrappers are reusable resources commonly used in the business domain, future development can be further accelerated.

VI. CONCLUSION AND FUTURE WORK

The reference model and description specification for application level components have provided the expected advantages in evaluation. A novel method is proposed to encapsulate and reuse currently existing software and system resources, which is a way to promote the reuse level of software modules. The model is simple and flexible to use, providing a standard framework that clearly figures out the business-specific properties to be focused on. The method has been employed in several typical business applications like loan reviewing systems, professional title declaring and management systems, as well as revenue administration systems, where the model's applicability and validity were preliminarily evaluated. A wider evaluation will be made involving a larger scale and scope of systems for continuously improving the model.

REFERENCES

[1] Meling, R., Montgomery, E.J., Ponnusamy, S., Wong, E.B., and Mehandjiska, D., "Storing and Retrieving Software Components: A Component Description Manager", Proc. Software Engineering Conference, 2000.
[2] Short, K., "Component Based Development and Object Modelling", Texas Instruments Software, 1997.
[3] "Component-Based Software with Beans and ActiveX", SunSoft, http://java.sun.com/javastation/whitepapers/javabeans/javabean_ch1.html
[4] Geisterfer, C.J.M. and Ghosh, S., "Software component specification: a study in perspective of component selection and reuse", International Conference on Commercial-off-the-Shelf (COTS)-Based Software Systems, 2006.
[5] Redolfi, G., Spagnoli, L., Hemesath, P., Bastos, R.M., Ribeiro, M.B., Cristal, M., and Espindola, A., "A Reference Model for Reusable Components Description", Proc. of the 38th Annual Hawaii International Conference on System Sciences, 2005.
[6] Graubmann, P. and Roshchin, M., "Semantic Annotation of Software Components", 32nd EUROMICRO Conference on Software Engineering and Advanced Applications, 2006.
[7] Liu Y., Wang Y.H., Wang L.F., and Gao Y., "Combining Mechanism of a Domain Component-Based Architecture Based on Separate Software Degree Components", Journal of Software, 16(8), 2005.
[8] Wu L. and Yang L., "A Component Based Geo-Workflow Framework: A Discussion on Methodological Issues", Mini-micro Systems, 23(9), 2002.

Fault Detection and Diagnosis of Continuous Process Based on Multiblock Principal Component Analysis

Libo Bie, Xiangdong Wang
School of Information Science and Engineering, ShenYang University of Technology, ShenYang, 110178, China
bielibo81758@yahoo.com.cn, xdwang@sut.edu.cn

Abstract

The fault detection and diagnosis of continuous processes is very important for production safety and product quality. PCA is powerful in fault detection; however, it has difficulties in diagnosing faults correctly in complex processes: especially when the process is complex and the number of monitored variables is large, fault diagnosis based on the variable contribution plot of the PCA method may make mistakes. In this paper, multiblock principal component analysis (MBPCA) is applied for fault detection and diagnosis in continuous processes, which uses the integral PCA model to detect faults and block contributions and variable contributions to diagnose them. The confidence limit of the SPE statistic in the integral PCA model is used to detect a fault, and the block contribution and variable contribution plots are applied to diagnose it: when a fault is detected, first judge which block the fault is located in, and then determine the fault position within that block; thus the fault diagnosis ability is improved. Simulations on the Tennessee Eastman process show that the proposed method can not only detect faults quickly, but also find the fault location exactly.

Keywords: fault detection, fault diagnosis, principal component analysis, multiblock principal component analysis, block contribution

1. Introduction

On-line monitoring of continuous process performance and fault diagnosis are extremely important for plant safety and good product quality. In the last decade, many approaches have been applied to this field, which can be classified into three categories: methods based on principle models, methods based on knowledge, and methods based on data. For the model-based methods, many factors, such as high process nonlinearity, high dimensionality, and the complexity of a process, can make a model very difficult to develop; for the knowledge-based methods, the lack of knowledge may confine their application. As an alternative, the data-driven methods, which only need historical data of the normal operating condition, have attracted much interest from chemical engineers.

One of the data-driven methods which has obtained widespread availability is principal component analysis (PCA) [1]. Owing to the favorable features of PCA (no need to know much about the process mechanism or an exact process model, and the ability to handle high-dimensional and correlated process variables), it has attracted much attention from chemical researchers for monitoring processes. The main point of the approach is to use PCA to compress normal process data and extract information by projecting the data onto a low-dimensional score space. Fault detection can be carried out with the confidence limits of the SPE or T2 statistics, and in simple processes fault diagnosis can be realized by the variable contribution plot. Simulation results show these methods are simple and powerful [2]. However, fault diagnosis based on the variable contribution plot may make mistakes in complex processes. To enhance the PCA model's explanation and fault diagnosis ability, researchers [3,4] proposed the approach of multiblock principal component analysis (MBPCA), or hierarchical principal component analysis (HPCA). The key idea is to decompose the large-scale system into function-independent or site-independent blocks; the confidence limits of the Hotelling T2 and SPE statistics in those subspaces can be used to detect faults, and the variable contribution plot was introduced for fault diagnosis.

In this paper, a novel continuous-process fault detection and diagnosis technique based on MBPCA and block contribution [5] is proposed: the approach of MBPCA, combined with block contributions and variable contributions, is developed to monitor the process and diagnose faults. The rest of the paper is organized as follows. In Section 2, the concept of PCA is introduced. In Section 3, the fault detection and diagnosis method based on MBPCA and block contribution is presented. The method is demonstrated on the Tennessee Eastman process in Section 4.

2. Principal Component Analysis (PCA)

As a data-driven method, PCA has been used successfully for detecting faults while handling highly correlated variables. The essence of PCA is to project the process data onto a principal component space of lower dimensionality in order to compress the data and extract information: PCA constructs a smaller number of independent variables, which are linear combinations of the original m variables, to reflect the information of the original data matrix as fully as possible.

The data matrix X (n x m), which has been collected under normal operation and standardized to zero mean and unit variance, comprises n samples and m process variables. The information is measured by variances, and PCA actually rests on the eigenvalue decomposition of the covariance matrix C [6]:

  C = X^T X / (n - 1)    (1)
  C p_i = lambda_i p_i    (2)

where C is the covariance matrix of X, p_i is the orthogonal unit eigenvector of C, named the principal component loading vector, and lambda_i is the corresponding eigenvalue in decreasing order. Thus PCA transforms the matrix X into the following form:

  X = t_1 p_1^T + t_2 p_2^T + ... + t_m p_m^T = sum_{i=1..m} t_i p_i^T    (3)

where t_i is the corresponding score vector:

  t_i = X p_i    (4)

PCA aims to explain the main variation of X by choosing the proper anterior k principal component score vectors. Matrix X can be reconstructed as

  X = sum_{i=1..k} t_i p_i^T + E    (5)

where E (n x m) is the residual, the difference between the original data and the data reconstructed with the anterior k principal components, usually describing the noise information.

The number of principal components retained in the PCA model is a critical issue in fault detection. If the number is small, the model will not effectively capture the information of the original data set and may increase the difficulty of analysis and diagnosis; on the other hand, if the number is large, the model will be over-parameterized and include noise. Many approaches have been proposed for selecting the number of principal components to be retained in a PCA model [7]. In this paper the simple but widely used cumulative percent variance (CPV) method is adopted, which judges whether the cumulative percent variance captured by the first k principal components exceeds a predetermined limit:

  ( sum_{i=1..k} lambda_i / sum_{i=1..m} lambda_i ) x 100% > 65%-90%    (6)

Supposing k principal components (k < m) are selected, formula (5) can be rewritten in the following form:

  X = T_k P_k^T + T_e P_e^T    (7)

where T_k (n x k) and P_k (m x k) are the principal component score and loading matrices, and T_e (n x (m-k)) and P_e (m x (m-k)) are the residual score and loading matrices, respectively.

Having established a PCA model based on normal process data, future behavior can be referenced against this "in-control" model. The comparison can be carried out in the principal component space or the residual space, corresponding to the T2 statistic or the SPE statistic. In this paper, fault detection is carried out with the SPE statistic. Suppose a new process sample x is collected. The squared prediction error (SPE), which indicates the amount by which a sample deviates from the model, is defined by

  SPE = e e^T = (x - x_hat)(x - x_hat)^T = x (I - P_k P_k^T) x^T    (8)

where

  t = x P_k    (9)
  x_hat = t P_k^T    (10)
  e = x - x_hat    (11)

and t, x_hat, e, I are the score, the prediction value, the prediction error, and the unit matrix, respectively. For concision, since P_e^T P_e = I, formula (8) can be rewritten as

  SPE = e e^T = x P_e P_e^T x^T    (12)

SPE should not exceed its confidence limit Q_lim under the normal condition; otherwise an abnormal state may have appeared in the process. Assuming the normal distribution of x, the confidence limit Q_lim for the SPE statistic can be calculated using the following formulas:

  Q_lim = theta_1 [ C_a sqrt(2 theta_2 h_0^2) / theta_1 + 1 + theta_2 h_0 (h_0 - 1) / theta_1^2 ]^(1/h_0)    (13)

  theta_i = sum_{j=k+1..m} lambda_j^i  (i = 1, 2, 3),   h_0 = 1 - 2 theta_1 theta_3 / (3 theta_2^2)

where lambda_j is the eigenvalue of the covariance matrix C and C_a is the standard normal deviate corresponding to the upper a percentile. Fault detection can thus be carried out by the confidence limit of the SPE or T2 statistic, and in simple processes fault diagnosis can be realized by the variable contribution plot.
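For illustration, a minimal numeric sketch of equations (12) and (13) is given below, assuming the residual loadings and the eigenvalues have already been obtained from an eigendecomposition of the covariance matrix; the helper names are our own:

    // Minimal sketch of SPE monitoring per equations (12)-(13). Assumes the
    // loadings/eigenvalues were obtained elsewhere; all names are illustrative.
    public class SpeMonitor {
        // SPE = x Pe Pe^T x^T, i.e. the squared norm of the residual projection.
        static double spe(double[] x, double[][] pe) {
            double sum = 0.0;
            for (int j = 0; j < pe[0].length; j++) {   // residual directions
                double proj = 0.0;
                for (int i = 0; i < x.length; i++) proj += x[i] * pe[i][j];
                sum += proj * proj;                    // ||x Pe||^2
            }
            return sum;
        }

        // Q_lim from eq. (13); lambda holds all m eigenvalues in decreasing
        // order, k is the number of retained components, ca the standard
        // normal deviate (about 2.326 for a 99% limit).
        static double qLim(double[] lambda, int k, double ca) {
            double t1 = 0, t2 = 0, t3 = 0;
            for (int j = k; j < lambda.length; j++) {
                t1 += lambda[j];
                t2 += lambda[j] * lambda[j];
                t3 += lambda[j] * lambda[j] * lambda[j];
            }
            double h0 = 1 - 2 * t1 * t3 / (3 * t2 * t2);
            double term = ca * Math.sqrt(2 * t2 * h0 * h0) / t1
                        + 1 + t2 * h0 * (h0 - 1) / (t1 * t1);
            return t1 * Math.pow(term, 1 / h0);
        }
    }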

The contribution to the SPE of the jth process variable at the ith observation is

  Q_ij = e_ij^2 = (x_ij - x_hat_ij)^2    (14)

The idea behind contribution plots is simple: when monitoring a process, each variable has a unique impact on the SPE and T2 statistics, and the larger the contribution of an individual variable, the more likely it is that the variable is the source of the excursion. When the SPE of a new sample exceeds the confidence limit of the PCA model, an abnormal situation can be judged to have occurred, but it cannot be decided where in the process the fault appears. That is the task of fault diagnosis, which can be realized by the variable contribution plot in a simple process. However, when highly correlated variables are monitored, significant contributions may appear across several of them, and the variable contribution plot cannot determine the exact location of the fault. The next section attempts to resolve this problem by using the concept of block contribution and multiblock principal component analysis (MBPCA).

3. Fault Detection and Diagnosis Based on MBPCA and Block Contribution

Multiblock principal component analysis (MBPCA), or hierarchical principal component analysis (HPCA), divides a large-scale process into several blocks to enhance the model's ability of explanation and diagnosis. Westerhuis et al. [8] showed that the scores in MBPCA can be calculated directly from regular PCA. S. Valle et al. [5] proved that the loadings can also be calculated from the regular PCA algorithm, and pointed out further that the equivalence between multiblock analysis algorithms and regular PCA indicates that the scores and loadings in MBPCA can be obtained by simply grouping the scores and loadings of the regular PCA. Valle et al. also proposed the concept of block contribution to assist fault diagnosis.

3.1 Blocking

Suppose the regular PCA model mentioned in Section 2 has been established, yielding the principal component loading matrix P_k (m x k) and the residual loading matrix P_e (m x (m-k)). Prior knowledge can be used to divide the m variables into B blocks, where the bth block contains m_b variables. The loading matrices P_k and P_e can then be parsed as

  P_k = [P_k1; ... ; P_kb; ... ; P_kB],  P_e = [P_e1; ... ; P_eb; ... ; P_eB]    (15)

where P_kb (m_b x k) and P_eb (m_b x (m-k)) are the principal component loading and residual loading matrices of the bth block, respectively.

3.2 Block SPE statistics

Given a new sample x, separate the data vector into B blocks so that the bth block is x_b (1 x m_b). Similar to equation (12), the block SPE of the bth block, as defined in [5], is

  SPE_b = e_b e_b^T = x P_e P_eb^T P_eb P_e^T x^T    (16)

i.e. the sum of the squared residuals over the variables of block b. Note that P_eb P_eb^T is not an idempotent matrix. Since SPE_b is a quadratic form of a multivariate normal vector x, it can be proved that SPE_b is distributed approximately as g_b chi^2(h_b), where

  g_b^spe = tr{(P_eb^T P_eb Lambda)^2} / tr{P_eb^T P_eb Lambda}    (17)
  h_b^spe = (tr{P_eb^T P_eb Lambda})^2 / tr{(P_eb^T P_eb Lambda)^2}    (18)

and Lambda is the diagonal matrix composed of the residual eigenvalues, namely Lambda = diag{lambda_{k+1}, ..., lambda_m}. Therefore the confidence limit for SPE_b is delta_b = g_b^spe chi_a^2(h_b^spe) for a given confidence 1-a. The block whose SPE exceeds its limit most severely can be considered the source of the fault, or the block mainly affected by the fault.

3.3 Variable contribution in a block

The contribution of the jth variable in the bth block is defined as

  SPE_b,j = (x_b,j - x_hat_b,j)^2    (19)

where x_b,j is the jth variable of the bth block and x_hat_b,j is its prediction by the PCA model.

The basic procedure can be summarized as follows: (1) collect process data under the normal condition, pretreat the data, and build the regular PCA model of all variables; (2) block the scores and loadings acquired in procedure 1 according to process knowledge, and calculate the confidence limit of each block SPE; (3) collect a new sample, monitor the process using the SPE of procedure 1, and report a fault if the SPE is out of limit continuously; (4) once a fault has been detected, calculate the block SPEs; (5) plot the variable contributions of the implicated block and localize the root cause of the fault with process knowledge.
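A sketch of procedure steps (4) and (5), computing block SPEs and in-block contributions from the residuals of the regular PCA model, might look as follows; the blocking index arrays are assumed to come from process knowledge:

    // Sketch of block SPE and in-block variable contributions (eqs. (16)-(19)),
    // reusing the residual loadings of the regular PCA model. The blocking
    // index arrays are assumptions supplied from process knowledge.
    public class BlockSpe {
        // Residual part of the sample: e = x Pe Pe^T (row vector of length m).
        static double[] residual(double[] x, double[][] pe) {
            int m = x.length, r = pe[0].length;
            double[] proj = new double[r];
            for (int j = 0; j < r; j++)
                for (int i = 0; i < m; i++) proj[j] += x[i] * pe[i][j];
            double[] e = new double[m];
            for (int i = 0; i < m; i++)
                for (int j = 0; j < r; j++) e[i] += proj[j] * pe[i][j];
            return e;
        }

        // SPE of one block: sum of squared residuals over the block's variables;
        // each squared term e[v]^2 is also that variable's contribution (19).
        static double blockSpe(double[] e, int[] blockVars) {
            double s = 0;
            for (int v : blockVars) s += e[v] * e[v];
            return s;
        }
    }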

4. Case Study

The proposed approaches are applied to the well-known benchmark Tennessee Eastman (TE) process, which was developed by Downs and Vogel [9]. As shown in Fig. 1, the process consists of five major units: a reactor, a product condenser, a vapor-liquid separator, a recycle compressor and a product stripper. The process has 12 manipulated variables, 22 continuous process measurements and 19 composition measurements. As the plant is open-loop unstable, many control schemes have been proposed, and in this paper we use the control scheme in [10]. The 22 continuous variables are selected for monitoring, and the sampling interval is set to 3 minutes.

Fig. 1. Tennessee Eastman process

Data are collected in the normal operating condition during a 25-hour simulation, which generates 500 samples as the reference set. Ten principal components are selected, which capture 69.87% of the variation in the reference set. The control limit of the SPE statistic is 16.357, based on the 99% confidence limit. The 22 variables are grouped into five blocks according to their locations in the plant, namely the material block, the reactor block, the separator block, the stripper block and the compressor block:

  Block 1, Material: 1 A feed; 2 D feed; 3 E feed; 4 A and C feed
  Block 2, Reactor: 6 feed rate; 7 pressure; 8 level; 9 temperature; 21 CW temperature
  Block 3, Separator: 11 temperature; 12 level; 13 pressure; 14 underflow; 22 CW temperature
  Block 4, Stripper: 15 level; 16 pressure; 17 underflow; 18 temperature; 19 steam flow
  Block 5, Compressor: 5 recycle flow; 10 purge rate; 20 compressor power

The control limits of the block SPE statistics are calculated under the 99% confidence limit respectively.

Then the first abnormal case is considered. Fault 1, a step change in the A/C composition ratio in feed stream 4, was introduced at the 5th hour and concluded at the 5.5th hour, i.e. the fault is present between the 100th and the 110th sample, and the process is simulated for 20 hours. The SPE chart under fault 1 is shown in Fig. 2: the SPE oversteps the 99% confidence limit at sample 105, detecting the fault quickly.

Fig. 2. SPE chart for fault 1

To compare the traditional PCA with the proposed method for fault diagnosis, the variable contribution plot of the SPE at sample 105 is presented in Fig. 3. It displays three variables contributing most to the SPE, which are variable 11 (separator temperature), variable 20 (compressor power) and variable 21 (reactor cooling water inlet temperature). The differences in contribution among them are not obvious, and it is difficult to determine which is most affected by the fault. If we only take the variable with the maximum contribution as the excursion of the fault, it will be believed that variable 21 is the source of the fault or is most affected by it. Making such a decision will lead to checking around the compressor and the reactor cooling water, and obviously make a mistake: in fact, fault 1 is located in the material section. So fault diagnosis based only on the variable contribution plot cannot correctly determine the position of the fault, especially in a complex process.

Fig. 3. Variable contribution plot at sample 105 for fault 1

To resolve the problem, we apply the proposed method to this case. We first draw the block contribution of the SPE of each block and determine the most likely block containing the fault by comparing how far each block exceeds its limit. Then we draw the variable contribution plot of that block and combine it with process knowledge to diagnose the fault.

Fig. 4 is the plot of the block SPE under fault 1. It shows that fault 1 is serious, causing obvious limit violations in all the blocks, and that block 1 (the material block) is most likely the source of the fault, as it exceeds its limit the most.

Fig. 4. Block SPE chart for fault 1

The variable contribution plot of the SPE at sample 105 in block 1 is shown in Fig. 5. It is apparent that the largest contribution to the SPE comes from variable 1 (A feed), so the fault is produced through variable 1 or affects it most. Knowing from process knowledge that the A/C ratio should be kept at a fixed value, and that the great excursion of variable 1 just compensates the change of the A/C ratio in stream 4, the operator can easily confirm that the fault is produced through the abnormal variable 4 (A and C feed), thus finding the cause of the fault.

Fig. 5. Variable contribution of block 1 at sample 105 for fault 1

The other abnormal case tested is fault 2, a step change of the reactor cooling water inlet temperature. As in the first case, the fault is introduced at the 5th hour and concluded at the 5.5th hour, and the process is simulated for 20 hours. The SPE chart for fault 2 in Fig. 6 shows that the limit violation starts at sample 102. Fig. 7 is the plot of the block SPE; it displays that fault 2 is not as serious as fault 1, and the trend in block 2 appears strongest, so block 2 is most likely the site of the fault. Fig. 8 shows that variable 9 (reactor temperature) is affected most by the fault within block 2. With process knowledge, the experienced operator is then inclined to suspect that the reactor cooling water inlet temperature has an abnormal change, thus diagnosing the fault.

Fig. 6. SPE chart for fault 2
Fig. 7. Block SPE chart for fault 2
Fig. 8. Variable contribution of block 2 at sample 102 for fault 2

5. Conclusions

In this paper, we apply MBPCA to detect and diagnose faults. First, the PCA model of all variables is built to detect faults in a timely manner; then the block in which the fault is located is determined by the block contributions; finally, the fault is diagnosed according to the variable contributions of that block combined with process knowledge, thus determining the location of the fault exactly. Simulation on the TE process shows the effectiveness of this method.

References

[1] Jackson, J.E., A User's Guide to Principal Components. New York: Wiley-Inter-Science, 1991.
[2] Gang Chen and Thomas J. McAvoy, "Predictive on-line monitoring of continuous processes", J. Proc. Cont.
[3] Wangen, L.E. and Kowalski, B.R., "A multiblock partial least squares algorithm for investigating complex chemical systems", J. Chemometrics, 1988, 3:3-20.
[4] Rannar, S., MacGregor, J.F., and Wold, S., "Adaptive batch monitoring using hierarchical PCA", Chemometr. Intell. Lab. Syst., 1998, 41:73-81.
[5] Qin, S.J., Valle, S., and Piovoso, M.J., "On unifying multiblock analysis with applications to decentralized process monitoring", J. Chemometrics, 2001, 15:715-742.
[6] Jincheng Fan et al., Data Analysis, Science Publication.
[7] Li, W., Yue, H., Valle, S., and Qin, S.J., "Recursive PCA for adaptive process monitoring", J. Process Control, vol. 10, pp. 471-486, 2000.
[8] Westerhuis, J.A., Kourti, T., and MacGregor, J.F., "Analysis of multiblock and hierarchical PCA and PLS models", J. Chemometrics, 1998, 12:301-321.
[9] Downs, J.J. and Vogel, E.F., "A plant-wide industrial process control problem", Computers and Chemical Engineering, 1993, 17(3):245-255.
[10] Larsson, T., et al., "Self-optimizing control of a large-scale plant: the Tennessee Eastman process", Ind. Eng. Chem. Res., vol. 40.

Strong Thread Migration in Heterogeneous Environment

Khandakar Entenam Unayes Ahmed, Md. Tamim Shahriar, Md. Khalad Hasan, Md. Al-mamun Shohag, Md. Mashud Rana
Dept. of Computer Science & Engineering, Shahjalal University of Science & Technology
E-mail: tanvir-cse@sust.edu, subeen@acm.org, khalad-cse@sust.edu, shohag_2002@yahoo.com, masudcse@yahoo.com

Abstract—This paper provides a complete framework for thread migration using JPDA. It is a powerful autonomous system for heterogeneous environments that maintains portability, since the Java Virtual Machine (JVM) is not modified. A system developed based on our framework needs no extra programming involvement to carry the whole migration through. In this project the JPDA (Java Platform Debugger Architecture) is used to capture and restore the execution state for dynamic thread migration.

Keywords—distributed computing, strong thread migration, capture debugger, restoration debugger, sending agent, receiving agent.

I. INTRODUCTION

In the Java programming language, mobility was considered during design. Source code is compiled into a transient state called bytecode (an instruction set that is very similar to the one of a hardware processor), and the Java Virtual Machine then interprets the bytecode into an executable form. This makes the developers' work easy: a user can compile his code once and then run it on any machine with a JVM, which makes Java convenient for implementing distributed applications. Object Serialization allows Java objects to be transmitted or stored, which enables a process to be started at new hosts with an initial state, but one that always starts from the beginning of the process. Our concern here is to start the process from the point at which the process was transmitted. That is, we need to transmit or store the state of the execution, so we have to serialize/deserialize the thread. Thread migration, or thread serialization, has many applications in the areas of persistence and mobility, such as checkpointing and recovery for fault tolerance purposes, dynamic reconfiguration of distributed systems, dynamic load balancing, and users traveling in mobile computing systems. So our goal is to develop a complete framework for thread migration. The focus in this project is on the migration of the execution state instead of the transfer of objects, since that facility is provided by Java's transport protocol.

II. JAVA PLATFORM DEBUGGING ARCHITECTURE (JPDA)

A debugger developer may hook into JPDA at any layer. Since the JDI is the highest level and the easiest to use, developers are encouraged to develop debuggers using JDI. Debuggers are able to "hook into" programs running in the JVM by requesting notification of events fired by the framework, e.g. exception, variable modification, stepping, method entry and method exit events. These events are placed in an EventQueue, from which the debugger may consume them and further query the running program.

III. RELATED WORKS

Bouchenak et al. [1] modify the JVM. In their framework they use two techniques named type inference and dynamic de-optimization; in type inference they create a user-defined stack, and in the dynamic de-optimization technique they use user-defined structures named type stack and type frame to restore the states. Though their full concern was on reducing performance overhead, they lose portability. Fuad et al. [2] developed a mechanism of strong thread migration. It uses an artificial program counter, and when it serializes a thread it maintains all information about local variables, the operand stack and the pc; however, it can suspend the thread only inside the run method, not in any other method. Sekiguchi et al. [3] utilize the Java exception handling mechanism. Their mechanism is based on a preprocessor that inserts code into the program to be migrated: it first identifies the methods that are to be migrated and then adds a primitive, migratory, for those methods. When migration takes place, a NotifyMigration exception is generated and handled by a try-catch block; at that time the local variables are stored in a state object, and the exception is caught in the current method and propagated to the calling method. A limitation is the code growth rate, which is high due to the code insertion by the preprocessor. Dorman [8] also uses JPDA for state capture and restoration; his complete focus was on capturing and rebuilding the execution context of running objects and not on a particular means of transportation. In our framework we neither lose portability nor insert any artificial code.
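As an illustration of the JDI facilities discussed in Section II, the following minimal sketch attaches to a JVM that was started with a JDWP socket agent, suspends it, and reads the stack state of the main thread; the port value and the error handling are our assumptions, not the paper's code:

    // Minimal JDI sketch: attach, suspend, and read local variables.
    import com.sun.jdi.*;
    import com.sun.jdi.connect.AttachingConnector;
    import com.sun.jdi.connect.Connector;
    import java.util.Map;

    public class JdiCaptureSketch {
        public static void main(String[] args) throws Exception {
            AttachingConnector con = Bootstrap.virtualMachineManager()
                    .attachingConnectors().stream()
                    .filter(c -> c.name().equals("com.sun.jdi.SocketAttach"))
                    .findFirst().orElseThrow(IllegalStateException::new);
            Map<String, Connector.Argument> a = con.defaultArguments();
            a.get("port").setValue("8000"); // debuggee started with a JDWP agent on 8000
            VirtualMachine vm = con.attach(a);
            vm.suspend();                   // debuggee must be passive while capturing
            for (ThreadReference t : vm.allThreads()) {
                if (!t.name().equals("main")) continue;
                for (StackFrame f : t.frames()) {     // walk the java stack
                    System.out.println(f.location()); // current execution point
                    for (LocalVariable v : f.visibleVariables())
                        System.out.println("  " + v.name() + " = " + f.getValue(v));
                }
            }
            vm.resume();
        }
    }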

While the JPDA provides access to information not normally available in standard Java applications, it also limits access in other areas. In querying the JVM for running objects, the debugger is not able to obtain direct object references: all references and values returned by the JVM are mirrors of the actual values, limiting the ability to use an object in the same manner as a conventional program. For the most part this poses few problems. A notable exception is the current execution point within a stack frame: while it is possible to both get and set local instance variables, and there exists a location() method for the retrieval of the execution point, in an effort to enforce security constraints within the Java environment there is no accompanying setLocation().

Figure 1: Java Platform Debugging Architecture (user interface and JDI front end, JDWP - Java Debug Wire Protocol, JVMDI back end inside the virtual machine)

IV. JVM CHARACTERISTICS

We access the Java stack, heap and method area using JPDA, as it provides a convenient way to access them. The JVM data areas that are related to our project are described below:
• A Java stack is associated with each thread in the JVM. A new frame is pushed onto the stack each time a Java method is invoked and popped from the stack when the method returns. A frame includes a table with the local variables of the associated method and an operand stack that contains the partial results (operands) of the method. A frame also contains registers such as the program counter (pc) and the top of the stack.
• The heap of the JVM includes all the Java objects created during the lifetime of the JVM. The heap associated with a thread consists of all the objects used by the thread (objects accessible from the thread's Java stack).
• The method area of the JVM includes all the classes that have been loaded by the JVM. The method area associated with a thread contains the classes used by the thread (classes where some methods are referenced by the thread's stack).

V. PROJECT GOALS

Using JPDA and byte code modification, this project builds up a complete framework for dynamic thread migration:
• Capture of the execution state of a process using JPDA.
• Transfer of the byte code of the class using the TCP/IP protocol.
• An agent that receives the byte code at the receiver and then executes it under debugging mode.
• Successful suspension of the process, modification of the byte code, restoration of all the variables, and resumption of the process.
• Display of all captured states for research purposes.

VI. OVERALL DESIGN

In this project the design concentrates on the state capture and restoration of running objects along with the transportation of mobile objects. The total capture and storage are done by a debugger developed using JDI, named CaptureDebugger. The JPDA allows stopping the execution of any running object and accessing the exact location of the execution state, which is then stored along with the local variables. The mobile object is then serialized along with the stored values of the local variables. Both execution and transmission are monitored by an agent named SendingAgent. There is also a receiving agent that receives the serialized data and then runs the mobile object in listening mode, to be traced by another debugger, launched from the sender side, named RestorationDebugger. RestorationDebugger stops the mobile object in the remote machine, restores the state and the values of the local variables, and restarts the process. The system thus consists of two agents, named sendingagent and receivingagent, for the sender and the receiver respectively, and four major components are responsible for making the migration succeed:
• Sendingagent: responsible for executing CaptureDebugger and migrating the mobile objects along with the captured information (execution state, values of local variables, etc.).
• Receivingagent: responsible for receiving the mobile object along with the captured information and executing the mobile object in suspension mode.
• CaptureDebugger: stops the running process and captures the execution state (pc), thread name, method names, signatures and local variables. It dumps all captured information into a stack list named stackframelist.
• RestorationDebugger: started by the sendingagent; it restores the captured information and then restarts the mobile object. It also notifies the sender about the start of execution of the mobile object.
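For example, the "suspended debugging mode" in which the receiving agent runs a mobile object can be obtained with the standard JDWP agent options; the class name and port below are placeholders:

    // Sketch: start a received class suspended, waiting for a debugger on
    // port 8000, so RestorationDebugger can attach. Values are assumptions.
    import java.io.IOException;

    public class LaunchSuspended {
        public static Process launch(String className) throws IOException {
            return new ProcessBuilder(
                    "java",
                    "-agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=8000",
                    className)
                .inheritIO()
                .start();   // the JVM waits, suspended, for a debugger
        }
    }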

A. Sending Agent

The sending agent starts its duty only when the migration decision takes place, and it maintains the total migration process. At first it sets up a connection with the receiving agent using a stream socket connection; using this socket it first reads the class file of the mobile object and then sends the class file along with its name and size. The sendingagent then executes the debugger CaptureDebugger using the exec() method of the Runtime class; the user can take the decision to migrate through the sendingagent. It is also responsible for launching the second debugger, named RestorationDebugger, which traces the mobile object in the remote machine (receiver). After sending the class file, it waits for the notification from the receiver about whether the mobile object has started there; once it gets the notification, it executes RestorationDebugger. It keeps itself in communication with the receiver until the migration is complete.

Algorithm: SendingAgent
1. Start
2. Check for any request for migration.
3. If there is a request to migrate then goto step 4, else goto step 2.
4. Set up a connection with the target machine.
5. Send the class file of the mobile object to the target machine.
6. Execute CaptureDebugger.
7. Check whether the mobile object has started in the target machine.
8. If the mobile object has started then goto step 9, else goto step 7.
9. Execute RestorationDebugger.
10. Check for further migration requests.
11. If there are more requests then goto step 4, else goto step 12.
12. Exit

Figure 2: Algorithm for Sending Agent

B. Receiving Agent

The receivingagent starts its function by opening a socket at a specific port number. It is always in listening mode, running an AcceptThread which can receive the mobile object along with the variables. AcceptThread first accepts the server socket and then gets the class name, size and class file sent by the sendingagent of the sender; it creates a file named after the class and then executes the class using the exec() method of the Runtime class in suspended debugging mode. This thread continues its execution until the server socket becomes null. The receivingagent is thus responsible for running the received mobile object in suspended mode; the object is restarted only when the second debugger, RestorationDebugger, connects. It then sends a notification about the start of the mobile object to the sendingagent of the sender.

Algorithm: ReceivingAgent
1. Start
2. Open a server socket.
3. Start a thread named AcceptThread.
4. Check the connection.
5. If the connection is closed then goto step 6, else goto step 4.
6. Exit

Figure 3: Algorithm for Receiving Agent

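The class-file hand-off between the sendingagent and the AcceptThread can be sketched as follows; the paper only states that TCP/IP is used, so the name/size/bytes framing below is our own assumption:

    // Sketch of the class-file hand-off over a stream socket.
    import java.io.*;
    import java.net.ServerSocket;
    import java.net.Socket;
    import java.nio.file.*;

    public class ClassFileTransfer {
        // Sender side: push one class file to the receiving agent.
        static void send(String host, int port, Path classFile) throws IOException {
            try (Socket s = new Socket(host, port);
                 DataOutputStream out = new DataOutputStream(s.getOutputStream())) {
                byte[] bytes = Files.readAllBytes(classFile);
                out.writeUTF(classFile.getFileName().toString()); // class name
                out.writeInt(bytes.length);                       // size
                out.write(bytes);                                 // byte code
            }
        }

        // Receiver side: accept one file and recreate it under the same name.
        static void receive(ServerSocket server) throws IOException {
            try (Socket s = server.accept();
                 DataInputStream in = new DataInputStream(s.getInputStream())) {
                String name = in.readUTF();
                byte[] bytes = new byte[in.readInt()];
                in.readFully(bytes);
                Files.write(Paths.get(name), bytes);
            }
        }
    }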
Algorithm: AcceptThread
1. Start
2. Accept the server socket.
3. Get the class name, size and class file.
4. Create a new class file with the same name and content as the class file received.
5. Run the class file at the specific port address.
6. Send a notification to the SendingAgent.
7. Check the server socket; if the server socket is null then goto step 8, else goto step 2.
8. Exit

Figure 5: Algorithm for AcceptThread

C. Capture Debugger

CaptureDebugger connects with the debuggee running in listening mode at a specific port number. It first finds the connector type by matching the specification provided through its arguments with the connection specifications provided by the VM, and then connects with the debuggee using this connector. To capture information the debuggee must be passive instead of active, so CaptureDebugger first passivates/suspends the debuggee. It then extracts all the threads in the debuggee, searches for the main thread, and dumps the main thread's stack. The stack contains all the frames, which are popped recursively; the frames are distinguished according to method signature and method name. From the frames, the program counter (pc), method name, method signature, and local variables with their corresponding values are extracted. All information is stored in a user-defined stack named stackframelist. Since CaptureDebugger acts as an individual process simply executed by the sendingagent, a handshake is required for interprocess communication, through which the captured information is transferred to the sendingagent. Our first concern is the single-thread case; the framework will later see the light of working with multiple threads.

Algorithm: CaptureDebugger
1. Start
2. Find the specific connector.
3. Check whether the connection is open.
4. If the connection is not open then goto step 13, else goto step 5.
5. Check whether the debuggee is present.
6. If the debuggee is present then goto step 7, else goto step 13.
7. Attach the debugger to the debuggee.
8. Suspend the debuggee.
9. Extract all threads in the debuggee.
10. Find the main thread.
11. Extract all frames.
12. Capture all states.
13. Exit

Figure 4: Algorithm for CaptureDebugger

D. Restoration Debugger

RestorationDebugger connects with the debuggee running in suspension mode at a specific port number. It first finds the connector type by matching the specification provided through its arguments with the connection specifications provided by the VM, and then connects with the debuggee using this connector. After attaching to the TargetVM it first resumes the TargetVM by default. In the meantime an EventHand thread is started by this debugger. EventHand traces all of the method call events and searches for a specific method call that is inserted just after the variable declarations in a method; if the specified method is found, the TargetVM is suspended. The debugger then updates the local variables using the stackframelist obtained from CaptureDebugger; the frames are distinguished according to method signature and method name, and each frame is updated accordingly. Then it resumes the TargetVM. It should be noted that the debugger resumes the process after changing the pc. It is not guaranteed that the process will restart from the exact desired location, but it is ensured that all of the local variables are updated successfully.

Algorithm: RestorationDebugger
1. Start
2. Find the specific connector.
3. Check whether the connection is open.
4. If the connection is not open then goto step 18, else goto step 5.
5. Check whether the debuggee is present.
6. If the debuggee is present then goto step 7, else goto step 18.
7. Attach the debugger to the TargetVM.
8. Resume the TargetVM.
9. Start the EventHand thread.
10. Search for the sentinel method.
11. If the method is found then goto step 12, else goto step 10.
12. Suspend the TargetVM.
13. Get the StackFrameList from the SendingAgent.
14. Find the main thread among all threads.
15. Dump the frames from the stack.
16. Update the frames using the StackFrameList.
17. Disconnect the debugger.
18. Exit

Figure 6: Algorithm for Restoration Debugger

VII. CONCLUSION

Our proposed framework provides a new and different way to achieve strong thread migration. Using JPDA we obtain a system-independent feature in our program, as JPDA is part of standard Java. Though we might have developed a faster mechanism for strong thread migration by modifying the JVM, we did not do so because we want to use our system in heterogeneous environments, and hence chose not to sacrifice portability.

REFERENCES

[1] Sara Bouchenak and Daniel Hagimont, "Zero Overhead Thread Migration", INRIA Technical Report No. 0261, May 2002.
[2] M. M. Fuad and M. J. Oudshoorn, "AdJava - Automatic Distribution of Java Applications", in Michael Oudshoorn, editor, Twenty-Fifth Australian Computer Science Conference, volume 4, pages 65-75, Melbourne, Australia, Australian Computer Society, 2002.
[3] T. Sekiguchi, H. Masuhara, and A. Yonezawa, "A Simple Extension of Java Language for Controllable Transparent Migration and Its Portable Implementation", in Coordination Models and Languages, pages 211-226, 1999.
[4] Sun Microsystems, "Java Platform Debugger Architecture Overview", http://java.sun.com/j2se/1.3/docs/guide/jpda/jpda.html
[5] Sun Microsystems, "The Java Virtual Machine Specification, Second Edition", 1999, http://java.sun.com/docs/books/vmspec/2ndedition/html/VMSpecTOC.doc.html
[6] E. R. Harold, Java I/O, O'Reilly, 1999.
[7] M. Dahm, "Byte Code Engineering Library", http://bcel.sourceforge.net/
[8] Andrew Dorman, "Execution Context Migration within a Standard Java Virtual Machine Environment", 04/12/2002.

the proposed technique provides both harmonic elimination and power factor correction. Finally. Non-linear loads.2009.00 © 2009 IEEE DOI 10. Keywords: Harmonic distortion. Section 2 of this paper provides the fundamentals of SAF and the structure of the controller is discussed.2009 International Conference on Computer Engineering and Technology A DSP-based active power filter for three-phase power distribution systems Ping Wei. A DSP-based three-phase active power filtering solution is proposed in this paper. Fig. especially power electronics loads.1109/ICCET. neutral current and unbalancing of non-linear loads locally such that a.c. The system considered in this paper is shown in Fig. Nanchang University. and neutral current for balancing of load currents locally and causes balanced sinusoidal unity power-factor supply currents under all operating conditions. 2. Active power filter system The main objective of the SAF is to compensate harmonics. mains to feed harmonics. 1 shows the basic SAF scheme including a set of non-linear loads on a three-phase 1. zhixiong3090@163. digital signal processors (DSP) 978-0-7695-3521-0/09 $25. Shunt Active power filter. In Section 3.com choq521@163. mains supplies only unity power-factor sinusoidal balanced three-phase currents. Furthermore.cn Abstract This paper presents a new digital signal processor (DSP)-based control method for shunt active power filter (SAF) for three-phase power distribution systems. This paper also presents the application of DSP-based controllers for SAF for three-phase distribution systems. create phase displacement and harmonic currents in the main three-phase power distribution system both make the power factor of the system worse. Conventional rectifiers are harmonic polluters of the power distribution system. In recent years.140 210 . The proposed technique requires fewer current sensors than other solutions which require both the source current and the load current information. the DSP-based solution provides a flexible and cheaper method to control the SAF. simulation results verifying the concept are presented. Compared to conventional analog-based methods. the simulation and experimental result also shows that both controller techniques can reduce harmonics in three-phase electric systems drawn by nonlinear loads and reduce hardware components. Introduction There has been a continuous proliferation of nonlinear type of loads due to the intensive use of electronics control in all branches of industry as well as by general consumers. The proposed technique uses a fixed-point DSP to effectively eliminate system harmonics and it also provides reactive power compensation or power factor correction. The SAF draws the required currents from the a. reactive power.Houquan Chen Department of Information Engineering.c. Digital signal processors are being used in a variety of applications that require sophisticated control.com wp620125@yahoo. Zhixiong Zhan. DSP-based controller for active power filters has been proposed in some papers in which general purpose and floating-point DSPs are used. reactive power. 1.

Vc. and at the same time act as the low pass filter for the AC source current. Vf2 and Vf3 supplied by the inverter as a function of the capacitor voltage. The inductors Lf1. G3 and G5 are: ⎡Vf 1⎤ Vc ⎡−2 1 1 ⎤ ⎡G1⎤ ⎢ 1 −2 1 ⎥ ⎢G3⎥ ⎢Vf 2⎥ = ⎥⎢ ⎥ ⎢ ⎥ 6 ⎢ ⎢ 1 1 −2⎥ ⎢G5⎥ ⎢ ⎥ ⎣ ⎦⎣ ⎦ ⎣Vf 3⎦ (1) 2. Then the SAF must be controlled to produce the compensating currents if1. The VSI is connected in Parallel with the three-phase supply through three inductors Lf1.distribution system.2. The load may be either single phase. In this paper.2. In this paper. if2 and if3 are SAF currents and Vs1. the SAF consists of three single phase inverters. if2 and if3 following the reference currents i*f1. the SAF system consists of a three phase voltage inverter with current regulation. The VSI contains a three-phase isolated gate bipolar transistor (IGBT) with anti-paralleling diodes. Vs2 and Vs3 are the supply voltages. the capacitor. The maximum voltage which must be supported by controllable switches is the maximum dc bus voltage. and the state of the switches G1. an IGBT with anti-parallel diode is needed to implement each switch. Fig. The DC capacitor provides a constant DC voltage and the real power necessary to cover the losses of the system.2 System modeling The representation of a three-phase voltages and currents of the VSI in Fig. The voltage in the DC capacitor can be calculated from the SAF currents and switching function as follows: VC = 1 C ∫ [G i 1 f1 + G 3if 2 + G 5if 3] (5) The set point of the storing capacitor voltage must be greater than the peak value of the line 211 . The switches of SAF must support unipolar voltage and bipolar current. which is used to inject the compensating current into the power line. The DC side of the VSI is connected to a DC capacitor. two phase or three phase and non-linear in nature. Fig. Cdc that carries the input ripple current of the inverter and the main reactive energy storage element. Lf2 and Lf3. Lf2 and Lf3 perform the voltage boost operation in combination with Where. 2 are as follows: the voltages Vf1.The proposed shunt active power filter 2. i*f2 and i*f3 through the control circuit. we consider three single phase uncontrolled diode bridge rectifiers with resistive–capacitive loading as non-linear unbalanced loads. Then the SAF currents can be written as: Lf1 Lf 2 dif 1 = VS 1 − V f 1 dt dif 2 = VS 2 − V f 2 dt (2) (3) (4) Lf 3 dif 3 = VS 3 − V f 3 dt Where. This load draws a nonsinusoidal current from the utility. G1.1 an SAF and nonlinear loads considered in this paper The current which must be supported by each switch is the maximum inductor current. if1.1 Description of proposed filter As shown in Fig. The inverter conduction state is represented by these logics. G3 and G5 represent three logic variables of the three legs of the inverter.

Fig. SAF connected in parallel with nonlinear loads.5.The proposed control system 3. has been inserted before the rectifier. In this paper. Vs2. a smoothing inductor. Simulation results of proposed SAF using PI controller.5 shows the performance of the SAF system using PI controller. 212 .3 Capacitor voltage with PI controller The basic operation of this proposed control method is shown in Fig.4. Fig. The capacitor voltage superimposed to its reference is shown in Fig. Lr. 2. The estimation of the reference currents from the measured DC bus voltage is the basic idea behind the PI controller based operation of the SAF. (d) Capacitor voltage with its reference. Simulation results A number of simulation results with different operating conditions were developed. Fig. the three-phase controlled rectifier with resistive load has to be compensated by the SAF. In Fig. and the capacitor voltage follow its reference. SAF is connected in parallel with nonlinear load. i*f2. The capacitor voltage is compared with its reference value. 3. Then. The output of PI controller is multiplied by the mains voltage waveform Vs1. It is noticed that the supply current in phase with the supply voltage. i*s3.5(c) shows the compensation current.neutral mains voltage in order to be able to shape properly the mains currents. The three-phase compensating reference current of SAF (i*f1. Fig. Waveform of the source current without SAF is shown in Fig. 5(a).5 (d). it is prevented the inverter saturation even in correspondence of Fig. 5(b) shows the source current with SAF superimposed to the supply voltage. i*s2. Vs3 in order to obtain the supply reference currents i*s1. in order to maintain the energy stored in the capacitor constant. Also. (a) Source current without filter. V*c. Fig.3. (b) Source current and source voltage with filter. Then the supply reference currents are proportional to the mains voltages. 4. The simulation results in steady state operation are presented. we give several simulation results with uncontrolled rectifier at α=0°. The PI controller is applied to regulate the error between the capacitor voltage and its reference. rectifier commutations. (c) Compensating current. In order to limit maximum slope of the rectifier current. i*f3) are estimated using reference supply currents and sensed load currents.

Fig. Fujita H. Revision A. 1997 Texas Instruments-TMS320C24x DSP Controllers Reference Set. Control and gating signals for the switches of the active filter are generated on a TMS320F240 DSP. Experimental setup of proposed DSP controlled active filter. A three-phase PWM controlled shunt active filter was designed to inject harmonic currents with the same magnitude and opposite phase to that of the nonlinear load currents in order to cancel harmonics at the point of common coupling (PCC) with the utility. March 1997. Texas Instruments-Dead-Time Generation on the TMS320C24x. (b) Source current. It is clear from Fig.7. Fig. Application Report SPRA289. The XDS510PP is a portable scan path emulator capable of operating with the standard test interface on TI DSPs and Microcontrollers. Fig. (c) Compensation current. From these figures. Fig. The DSP is connected to a computer through a XDS510PP emulator. Akagi H. it is clear that the effectiveness of the proposed controller for active power filter. Conclusions performance of the active filter. 13(2):577–84. The Texas Instruments (TI) TMS320F240 processor is a fixed-point 16-bit processor. Application Report SPRA371. The focus of this paper is to present a novel DSP controlled active filter to cancel harmonics generated in three-phase industrial and commercial power systems. 7(a) shows the supply voltage and current without ASF. IEEE T Power Electr 1998. supply current and its reference. The feasibility of the approach was proven through the experimental results. 7(b) shows the supply phase voltage. 7 shows experimental waveforms for the load condition of uncontrolled rectifier. 6 shows the block diagram of the experimental setup. 213 References [1] H. Vol. Fig. System and Instruction Set).Akagi. Texas Instruments-Configuring PWM Outputs of TMS320F240 with Dead Band for different Power Devices. 7(c) shows the compensating current. operating at 20 million instructions per second (MIPS). [2] [3] [4] [5] .4. The control scheme using three independent hysteresis current controllers has been implemented. The unified power quality conditioner: the integration of series-and shunt-active filters. its reference and source voltage. 1 (CPU. IA 32 (6) (November=December 1996) 1312–1322. and has the capabilities of using advanced PWM motor control techniques for adjustable speed drives. 5.6. This portable emulator works of the computer parallel port. Fig. IEEE Trans. Experimental results of proposed SAF. (a) Source current and voltage without filter. New trends in active filters for power conditioning. 7(b) that the supply current is almost sin waveform and follows the supply voltage in its waveform shape with almost a unity displacement power factor. The operation and modeling of the SAF have been described. 1997. A laboratory prototype has been built to verify the Fig. DSP implementation and Experimental results A laboratory prototype of the active filter has been built to evaluate the performance of the proposed active filter and its implementation in the TMS320F240 DSP.

Jeong S. 45(5):722–9. S. A novel real-time detection method of active and reactive currents for single-phase active power filters. IEEE T Ind Electron 1997.A. K. 2. Guerrero. power factor correction. IEEE T Ind Electron 1998. IEEE T Power Electr 2000. IEE Proceedings 151 (3) (8 May. Al-Haddad K. Buso S. 2004 (PESC 04). 214 . Technical References. An improved control algorithm of shunt active filter for voltage regulation. Lindeke. Yasushi. Malesoni L. M. Mussa. Matas. 14(4):5–12. Power Electron. M. Miret. Y. harmonic elimination. Castilla. Singh B. Toshihiko. Eiji.1024–1027 Texas Instruments-TMS320C24x DSP Controllers Evaluation Modul. IEEE T Power Electr 1998. 13(1):160–8. Nishida. Norio. Control of a new active power filter using 3-d vector control. and balancing of nonlinear loads.M. 17–21 June. F. Electric Power Applic. Barbi. H. Rukonuzzman. I. IEEE T Power Electr 1999. DSP-based active power filter with predictive current control. O. Masayoshi.P. Platt D. Power Electronics Specialists Conference. IEEE Trans. Nakaoka. m20–25 June 2004. 2007. Gosbell V. 2007 (PESC 2007). IEEE. D. Garcia deVicuna. pp. Combined deadbeat control of a series parallel converter combination used as a universal power filter. 2004) 283–288. Dastfan A. de Souza. Single phase active power filter controlled with a digital signal processor – DSP. 44(3):329–36. Feedback linearization of a single-phase active power filter via sliding mode control. 23 (1) (2008) 116–125. Habetler T. Mattavelli P. Woo M. Power Electronics Specialists Conference. I. T. Chandra A. J. 15(6):495–503. 2004 IEEE 35th Annual vol. 2933–2938.[6] [7] [8] [9] [10] [11] [12] [13] [14] [15] Kamran F. 1997. Advanced current control implementation with robust deadbeat algorithm for shunt single-phase voltage-source type active power filters. pp. M. Comparison of current control techniques for active filter applications. J. L. J.

user represents the user collection of executing task. in the fig.). The widespread mainstream model is based on the RBAC role-based access control model. furthermore the execution time length of task2 is regulated not over 30ms.2009 International Conference on Computer Engineering and Technology Access Control Scheme for Workflow Gao Lijun Zhang Lu Xu Lei computer school Shenyang institute of Aeronautical Engineering Shenyang. The shortage of RBAC model in the workflow system RBAC. 2008. a new access control model is proposed with the introduction of task set. The equipment purchasing workflow consists of five tasks: filing an application for purchasing(task1). The task executing constraint of a real workflow instance is as follows: (1) is project manager pr executes task1. by emphasizing the weakness existing in the present workflow system. etc. it is inevitable that some personnel can execute illegal operation by the convenience of work. especially the shortage of control mechanism of workflow. If the system fails to provide sufficient security protection for these cooperative staff. task flow.com Abstract—Access control is an important security measure in workflow system.workflow. which has wide application in the access control field. is the basic framework of RBAC96[1] model. China e-mail: gaolijun0610@163. and the minimum permission constraint is the the minimum permission constraint of role[2]. scrutinizing purchase bill(task3). financial clerk cl executes task2. It overcomes the shortcomings of the traditional model based on the role access control that has bad dynamic adaptivity of authority assigning. order. A.1109/ICCET. and the most remarkable characteristic is to divide big task into smaller tasks which can be finished by many men cooperatively. 2008 to Oct. INTRODUCTION Workflow is an efficient method of complex multitask cooperative modeling with extensive application in the enterprises’ informationization. and the granularity partition of permission has bigger localization RBAC is a kind of access control model tending to be static.(3) The time constraint period is set from Jan. no flow control and time constraint. task flow. so it’s difficult to authorize and revoke the permission timely and rightly according to the need of transaction process dynamically. so it fails to control the interactive relations among tasks (e. B. and can’t satisfy all control demands if the existing RBAC model is introduced into workflow system straightly. confirming purchase bill(task4). The dynamic property of assignment and revocation of permission is bad. which determined the granularity of permission can be specified up to the role level. then by su.120 215 . recurrent time constraint. Shortage of task process control capability and the periodic constraint capability Take the case of a simplified firm equipment purchasing workflow in this section. it can not only forbid users executing unauthorized tasks but also ensure authorized users to execute tasks smoothly. tasks. making purchase bill(task2). executing purchase(task5). that main idea is modify the traditional two-layer authorization structure—user-permission into three-layer authorization structure—user-role-permission. Furthermore permission is relevant to the role directly in the traditional RBAC model. ma pr cl Figure 1 s Role Hierarchy Structure 978-0-7695-3521-0/09 $25. which can’t be related to the transaction process of application tightly.(2)the execution series of task is task1 task2 task3 task4 task5. 
Based on the analysis of many access control models. but if A has executed task1 then A must execute task4. .2009. task3 should activated by ma.g. and task2 can be executed from 9 o’clock to 16 o’clock on the 15th every moth. but has some weakness in the workflow area. etc. however the existing RBAC can’t express complicated workflow access control constraint. the figure 1 shows the role layer structure. general manager role ma and financial executive role su cooperate to execute task3. the number on the arc represents the user number needed in this task. therefore through relating user with permission to simplify the difficulty of authorization management. The model can therefore enhance the security and practicability of the workflow system. II. the faults of RBAC in the workflow system are pointed out.00 © 2009 IEEE DOI 10. put forward by Sandhu. therefore effective task control model is needed to manage and control the access of these cooperation staff. role c1 executes task5.and if user A has executed task1 or task2 then A is forbidden executing task3 and task5. RBAC model has its own advantage. time constraint I. Keywords. time constraint and the formalized definition and relevant algorithm. That’s to say. 2.

representing the affiliation between session and user. this fault is more obvious because the concept of task is very distinct. etc. on which adding the workflow control mechanism that make the current task can obtain the prescriptive permission only in accordance with the constraint conditions set by the model. representing the role’s task assigned.which can get the time length of task permitted to be activated. 3 illustrate. Definition 1 T-R&TBAC model structure U={u1. moreover specify the granulites of permission up to the task level. it must have the permission during the executing permission task instance period. permission is not directly related to the role.a many-to-many mapping from user setto role set. just like the following fig. and improve the dynamic adaptability of workflow system dramatically. TSUCC Ž T is the front ordered set of present active task.4]. not to say that the RBAC failed to recognize the task status which results to unable to trace the accomplishment of task and increases the difficulties of calculating the start time of follow-up task. Definition2 Defination of conflicting role and conflicting task set CR={cr1. this structure must be changed. task conflict constraint. Definition 5 T-R&TBAC system status function mapping from ti to real time. Definition 4 the time and status function TM={ti |i  N} TM is the set of all time point of visual world. task time constraint.…cth}. and make the task execution limited by time constraint so it’s easy to think that modify the original structure into four-level one. so it’s natural to realize the dynamic assignment and revotion of permission. task cooperative constraint. and assigns tasks to every role. UA Ž U u R . a many-to-many mapping from role set to task set. S={s1. and finally form a access control model with time constraint -----T-R&TBAC which based on the role and task. cri  R and make u  U . In order to solve the above-mentioned problem. so it’s natural to introduce the concept of task set into RBAC model clearly.s2…sq}is the set of all sessions. which is the task set must been finished before finishing this task. which is the task set must been finished after finishing this task. TP Ž T u P. In the workflow system. cr  CR :| role _ set (u ) ˆ cr |d 1 where the role_set is the function assigning role to user. which return current time. ct 2. task flow constraint. CT { ct 1.r2…rn} is the set of all roles[3.p2…pp} is the set of all access permission authorities. ct  CT :| task _ set (r ) ˆ ct |d 1 Figure 3 Simplified T-R&TBAC Model The model gives specific role to all users of the whole workflow system. No matter which user is given with any role to register . representing the task’s permission assigned. The following gives the formalization description of model and the related algorithm realizing the above-mentioned constraints. RH Ž R u R define the partial order on R. T-R&TBAC model inherits the characteristics of RBAC3[1]. cr2. representing the user’s role assigned.Figure 2 Purchasing workflow The task execution of workflow system is limited by some conditions. while having to be related through task. a many-to-many mapping from task set to permission set. TaskTimeLength T o N TaskTimeLength (taski)= n n  N is natural number(s .t2…to} is the set of all divided tasks. R={r1. III. but can set a Currenttime ‡ o TM is a global atomic sub function. SA Ž S u U. which called the role hierarchy.the confilicting task set cti  T and make r  R. RT Ž R u T. u2…um} is the set of all users. T={t1. 
task cooperation constraint. ti  TM represents one time point which should not be accordance with real time. 216 where task_set is the function assignning task to role. j  N .the conflicting role set. ti tj  TM i<j œ ti<tj TR={(ti tj)| ti tj  TM i<j} is the time zone between two time points. P={p1. The constraint conditions include time constraint. including the task order constraint.…crh}. TNEXT { …tnextp …tnextq }. Definition 3 Logic control structure of task TSUCC {…tsucc i …tsucc j }. In such a kind of access control structure. and i. . which are all hard to resolve through traditional RBAC. also the minimum access executing permission of every task. GetTaskTR T o TR GetTaskTR taski = tr tr  TR is the time zone of task permitted to activate. a function mapping from session set to user set. StartTime T o TM StartTime taski = ts ts  TM is the beginning time of task. Introducing the access control model of time constraint T-R&TBAC based on the role and task Time-Rose & Task Based Access Control The above section analyzes the faults of role access control model in the workflow system but the basic reason is the three-level access control structure which determined these congenital faults of RBAC. TNEXT Ž T is the back ordered task set of present active task. permission assignment constraint.

TaskStation} False ( unfinished TaskStation is the task accomplishment status. GetTaskPermission: T o P:GetTaskPermission taski =P’ P’ Ž P taskj  T obtain the limited permission of the task. GetTaskPermission taski . task flow. Sandhu. GetTaskStation T o {True GetTaskStation (task )= False i . Conclusion The work flow control mechanism is introduced into the T-R&TBAC model based on the RBAC model.R. Feinstein and Charles E. Jan H.D. time. P.RT. taskj  T delete the task from active task set. Xu Lei.Computers and Security. 1996. 1999: 33-42 [4] Ahn. On the increasing importance of constraints.33(9):15 217 .TSUCC. [6] Gao lijun. Constraints for role-based access control. The extended model overcomes many inherent shortcomings of traditional model.SA. Youman. PerAccessFlag={False ( True ( The resolution scheme and relevant algorithm of task flow control and periodic constraint Definition 7 global function definition GetRoleTask: R o T:GetRoleTask r =T’ T’ Ž T get the role’s task. Tnext Ž T is back ordered set obtaining the current active task. The reseach on the turning point choose and the security recovery algorithm in TRBAC [J]. DelTaskFrom: T o AT: DelTaskFrom (taski)= AT. 1999.Shanghai Computer Engineering. showing the constraints are not satisfied. Hal L. AddTaskTo:T o AT: AddTaskTo(taski)=AT.taskj  T add the task into active task set. USA: ACM.UA. MD: ACM Press.CR. Virginia. etc.The RCL2000 language for specifying role-based authorization constraints [Ph. Role-Based access control models[J]. To a real system. GetSuccTask T o Tsucc GetSuccTask taski Tsucc. DelTaskPermission T o P DelTaskPermission taski DP.filling the user of the task in the back ordered task assigned user area. the If with the above judgments of PerAccessFlag is still True. IEEE Computer.CT. DelTaskPermission taski . Coyne. which show that the task is feasible and obtain the permission. Sandhu. Botha. which makes the workflow constrained by time. 1996. Ravi S. further study should been made to solve the time efficiency and recovery of security status. Edward J. Fairfax. and the relevant resources access permission is also been deprived.  Constraint_valid:state o {True False Constraint_valid(state)={False ( True ( In order to discuss conveniently. inheritance.} } task. Definition 6 authorizing constraint judgement Authorizing constraint judgement is a one variable function. task cooperation. permission assignment. Thesis].} Else {BackWriteNextUser GetNextTask taski)). 39-46. enhancing the security greatly.-J. permission. [2] Reinhardt A.PT. this 16 variables set illustrates the status of the task at that occasion.T. Algorithm description: PerAccessFlag=True AddTaskTo(taski) GetTaskHeadStation(taski) {True ( ( False [1] Ravi S. Tsucc Ž T is front ordered set obtaining the current active task. combining with workflow system seamlessly. DP Ž P delete the permission of the task. {True ( finished While Currenttime/ ' T //rejudge the constraint conditions at ' T interval { If Constraint_valid(state ) task GetSuccTask ||GetTaskStation (taski))==false || (StartTime taski IN GetTaskTR (taski)) || (Currenttime StartTime<TaskTimeLength (taski)) PerAccessFlag = False If PerAccessFlag {DelTaskFrom (taski). task confliction.Access Control in Document-centric Workflow Systems An Agent-based Approach[J]. judging whether meeting the constraints of user. obtaining the accomplishment of current task. confliction. Eloff.StartTime. so the task will be deleted. we set a task executing flag variable. 
In: Proceedings of the ACM RBAC Workshop. indicating the back ordered task of this task should be accomplished by the user. GetNextTask T o Tnext GetNextTask taski Tnext.20(6):525-532 [3] Trent Jaeger.Currenttim e.G.TNEXT.P.2007. then access to relevant resources. Fairfax. AT is the active task set. etc.2001. AT Ž T.RH. if PerAccessFlag is False.29(2) 3847. role(including role cooperation). VA: George Mason University. BackWriteNextUser Tnext o HEADNODE BackWriteNextUser taski =HeadNode which can be empty . [5] Fang Chen.state={U. In: Proceedings of the Fourth ACM Workshop on Role-Based Access Control[C].

The interference between 802. 310000. the research result on the interference between Bluetooth and RFID not appears in the public literature. The inter-piconet interference within the Bluetooth network is illustrated in [10].11b and Bluetooth is widely researched and many antiinterference methods are proposed in the literature such 978-0-7695-3521-0/09 $25. Therefore. Furthermore. In recent years. INTRODUCTION Bluetooth chips are embedded into diverse products such as notebook computers. the performance of Bluetooth under the interference of RFID is investigated.BER. The theoretical and simulation result of PER. PER I. 100876. the simulation and theoretical results are compared to justify the proposed mathematical model.2009 International Conference on Computer Engineering and Technology A Mathematical Model of Interference between RFID and Bluetooth in Fading Channel Junjie Chen Beijing University of Posts and Telecommunications. PER of 802.e. The rest parts of this article are organized as follows.2 to research the coexistence problem of WLAN and WPAN operating on 2. Bluetooth uses frequency hopping spread spectrum (FHSS) and hops on 79 channels in 2. in section III.. Although many works related to Bluetooth are completed. Bluetooth. IEEE specifically established Working Group 802. RFID devices emitting on a relatively large power (up to 4000 mW EIRP) bring significant interferences and interrupt the data exchange in Bluetooth piconet. the shopping mall and the hospital) where many people use Bluetooth-enabled devices. The proposed model carefully takes PHY (i. China. the packet format and the traffic load) into account. Beijing. Given the required parameters such as the distance and the frequency. the performance of Bluetooth under the interference of RFID is worthwhile to be evaluated to pave the way for developing coexistence algorithms. China. a mathematical model is established to quantify the performance of Bluetooth in the presence of RFID’s interference. In final.. the distance and the modulation) and MAC (i. The mutual interference between Bluetooth and Zigbee is presented in [9]. Keywords-RFID. digital cameras and headsets to facilitate the interdevice data exchanging. .e. The following 2. Abstract—In this paper. in this paper.1109/ICCET. II. Therefore. the academia and industry paid much attention to RFID technology and it will be deployed in the place (such as the campus. Packet Error Rate (PER). the channel model. the interference in PHY is analyzed and Bit Error Rate (BER) of Bluetooth under the interference is obtained.193 218 as [6]-[8].00 © 2009 IEEE DOI 10. The interference related to Bluetooth or RFID is extensively investigated in some previous works. the performance metrics (i.com Jianqiu Zeng Beijing University of Posts and Telecommunications. Based on the BER and the collision time. China.. In final section. Beijing. In section II. Since the transmitted power of Bluetooth is very limited (1 mW in general). the average delay and the throughput are selected as the performance metrics and are analyzed in this paper. the average delay and the throughput) are formulated. Yuchen Zhou Hangzhou branch of China Telecom. Interference.4 GHz ISM band. Then. In [1]. Chenjunjie78@gmail. the frequency hopping. 100876.4 GHz indoor path loss model is used for Bluetooth device [3]. cellular phones. Packet Error Rate (PER).2009.15.11b under the interference of RFID is analyzed. the theoretical analysis is justified by the simulation. in section IV. 
the average delay and the throughput are compared. to the author’s knowledge. PER. the path loss can be figured out easily by the path loss model.4 GHz band. the average delay and the throughput are selected as the performance metrics. the MAC sublayer is taken into account and the collision time is figured out. the transmitted power. PHY LAYER INTERFERENCE Path loss is defined as the difference of the signal strength in the unit of decibels between the transmitter and the receiver. Hangzhou. A mathematical model is proposed to quantify the performance degradation of Bluetooth.e.

(13) and ρ in Eq. the path loss model of 2.3). solving the Bessel and Marcum Q function. Bluetooth When the distance between the RFID device and the victim receiver is given.2+20log10 (d). 0. ⎧40.b) − 0. Similarly. The model is not effective below 0. the signal to interference and noise ratio (SINR). RFID The signal-to-interference ratio (SIR) is the ratio of the received power and the interference power. and f in the unit of Hertz is the operating frequency. And the correlation coefficient ρ in above equations is given as: sin(2πβ ) (11) ρ= 2πβ where β is the modulation index. respectively. Bluetooth = ⎨ ⎩58.5 ⋅ I0(ab) ⋅ exp( −(a 2 + b2 ) 2 ) where Iβ(x) is β order modified Bessel function of the first kind and Q(a. which is used to calculate BER in general.5 ≤ d ≤ 8 m (1) Lp. x ≥0 I β ( x) = ∑ k = 0 k !Γ( β + k + 1) where the Gamma function is defined as follows. b) = Q1 (a. obtained: a = γ (1 − 1 − ρ ). From Eq.5 meter due to near-field effects.35. respectively.28 and a maximum modulation index of 0. therefore the BER is: (6) Pb = Q(a. Hence. BER as the function of Eb/No is obtained. The transmitted power of Bluetooth is denoted by Pt in the unit of dBm and the received power of Bluetooth is denoted by Pr dBm. It is known that the received power is the difference of the transmitted power and the path loss: (3) Pr = Pt − Lp. p >0 (8) 10 −2 The Marcum Q function is: 10 −3 10 −4 −2 0 2 4 SIR (dB) 6 8 10 219 . Eb Eb × B Eb × (2 / Tb) 2Pr (12) = = = = 2SIR = 2γ N0 N0× B Pi Pi Using SIR to replace Eb/No in Eq. (1) to (5).5+33log10(d/8).5. according to the parameter a and b is Eq. Since the noise power is very weak relative to the interference power. the path loss of RFID can be obtained by its own path loss model. Eb/No in previous equations can be converted to SIR as the following formula. BER of Bluetooth under the interference of RFID can be figured out.15. RFID = −147.b) = ∫ x ⋅ exp ⎡ −( x 2 + a 2 ) 2 ⎤ ⋅ I 0(ax)dx ⎣ ⎦ b β ∞ (9) ∞ a = exp(−(a 2 + b 2 ) / 2) × ∑ ( ) I β (ab) (b > a > 0 ) β =0 b Besides.45 GHz RFID is given by: Q(a. (2). b = (1 + 1 − ρ ) 2No 2No (10) where Eb/No is the ratio of energy per bit to power spectral density (PSD) of the noise (or the interference). BER of Bluetooth as the function of SIR is illustrated in Fig. therefore.BER Lp. the path loss of RFID signal can be calculated by Eq. 10 0 Bluetooth BER vs.6 + 20log(d)+ 20log(f) − 10log(GTGR) (2) where d meters is the distance between the RFID interferer and the Bluetooth receiver. (10).1 standard [2] specifies a minimum modulation index of 0. (14) Pb = f (γ ) where SIR can be obtained easily by Eq. The Bessel function is: ∞ ( x / 2) β + 2 k (7) . the noise can be neglected in this case. 2 b = γ (1 + 1 − ρ ) 2 (13) So now. a and b is defined as: Eb Eb 2 2 a= (1 − 1 − ρ ). therefore the interference power Pi in Bluetooth receiver is: (4) Pi = Pti − Lp. GFSK modulation of Bluetooth has a bandwidth time (BT) of 0. approximates to the SIR and is given by: (5) γ = SINR ≈ SIR = Pr − Pi Bluetooth modulator uses Gaussian Frequency Shift Keying (GFSK) and the envelop detection. in which the solid and dotted line stand for BER with the minimum and maximum modulation index. and it is the function of SIR as follows. (6) to (11). Suppose the transmitted power of RFID is Pti dBm. GT and GR are antenna gains of the RFID interferer and Bluetooth receiver.35 10 −1 Γ( p ) ∫ ∞ 0 t p −1e − t dt . d > 8 m where d in the unit of meter denotes the distance between the transmitter and the receiver of Bluetooth. in above equations. 
Path loss follows the free-space propagation (path loss exponent is 2) up to 8 meters and then attenuates more rapidly (path loss exponent is 3.28 Index=0. 1. but BER as the function of SIR is required for the interference analysis. b) is Marcum Q function. Both are specified in the following formulas. and IEEE 802.SIR Index=0. According to [5]. (11).

The received power of the Rayleigh fading follows the exponential distribution. To simplify the analysis. It is a continuous variable and is uniformly distributed between 0 and LR (the average inter-arrival time of two consecutive RFID transmissions).. the TB is 366 us long. λ ∈ [0.. ⎪TR − X .. 3 or 5 slots. in the real world. if the traffic load of Bluetooth is λ . Bluetooth and RFID are asynchronous due to its different PHY and MAC technologies. k = 0.45 GHz RFID operate at the data rate of up to 40 Kbps. The slot time is denoted by TS. 2) and the transmission time (denoted by TR in Fig. the time domain is divided into 625-us-long slots by Bluetooth system and each packet can occupies 1. To simplify the analysis. LR ) In the downlink. (14). However. considering the effect of the multipath. therefore SIR is also exponentially distributed. the received power based on path loss and shadowing alone. TR − TB < X ≤ LR − T B ⎪ TC = ⎨ TB − Tidle. the received signal has Rayleigh distributed amplitude in the fading channel. the RFID packet is definitely longer than the Bluetooth one. the traffic load of Bluetooth is exactly the probability of packet occurrence in one slot.. Although BER is drastically changing. the one slot packet is used.e. TR < X ≤ LR ⎩ (20) λ Figure 2. and only the header of each RFID packet will occupy the time period greater than one time slot of Bluetooth. The Frequency Domain Collision Bluetooth system occupies from 2. the 2. Bluetooth’s BER versus SIR The BER obtained in Eq. ⎪TR − X . the mean of BER can be formulated as follows. For RFID.4 to 2. (14) is the result in the AWGN channel. therefore. [1] TB ≤TR & TB ≤Tidle 0 ≤ X ≤ TR − TB ⎧TB . the RFID command packet in forward link and the response packet in reverse link have an equivalent interference to the Bluetooth packet.4835 GHz. LR − TB < X ≤ TR ⎪ ⎪TB + X − LR. As Fig. Hence. and the x is the instantaneous received power. According to the theory of probability. The time domain packet collision model B. the idle time (denoted by Tidle in Fig. The Time Domain Collision When Bluetooth piconet collocates with RFID devices. i. the average BER of Bluetooth can be determined. It is noted that the RFID reader still sends out an unmodulated carrier to power up the passive tag when the tag transmit the response packet in the uplink. In this model. the BER is the function of the SIR. LR − TB ≤ X ≤ LR ⎩ [2] TB ≤TR & TB >Tidle 0 ≤ X ≤ TR − TB ⎧TB..1] Since Bluetooth transmits packets in the slot. TR − TB < X < TR (19) ⎪ TC = ⎨ TR ≤ X < LR − TB ⎪0. The operating frequency is: (21) f = 2402+ k MHz. 1 (17) Tidle = ( − 1)Tbusy. According to the standard. 2 is the collision model of RFID and Bluetooth packet in time domain. TC. the traffic load (denoted by λ) is taken into account. in general. As illustrated by Eq. the Bluetooth packet may overlap with the RFID packets in time domain. can be calculated as the following equations. Since the SIR is randomly changing. The packets of them have a time offset which is a random variable and is denoted by X in Fig.Figure 1. 1 (15) p ( x ) = e − x / Pr Pr where Pr is the average received power of the signal. but the Bluetooth packet only occupies TB of the slot time to transmit. then the probability of packet occurrence in one slot is exactly λ. RFID also adopts FHSS to avoid the inband 220 . That is. 2. MAC SUBLAYER INTERFERENCE A. the BER is also a random variable.b) − I0(ab) ⋅ e 2 )e Pr dx Pr −∞ 2 (16) III. a guard band is used at the lower and the upper band edge. Hence. 
2) has the following relation. The collision time. Fig. ⎪TB + X − LR. Bluetooth system adopting FHSS can hop on the 79 channels uniformly.78 Each RF channel is 1 MHz wide and 79 channels are available. The collision time is defined as the time interval in Bluetooth packet which is overlapped by the RFID packet and is the time duration of the interference. Pb = ∫ +∞ −∞ Pb ( x / Pi ) p ( x)dx 2 2 − ( a +b ) x − 1 +∞ 1 = ∫ (Q(a. RFID reader sends out a modulated carrier to power up the tags as well as to carry the command message. 2 illustrated. but in order to comply with out-of-band regulations in each country. the interfering power of RFID is assumed to be constant. the probability of each channel being occupied is Pf =1/ 79. (18) X ∼ U (0.

we assume that the power of Bluetooth and RFID is uniformly distributed in the occupied channel.1] (23) ⎪ S=⎨ ⎪ 2 − X. Even if the errors are uncorrected. Packet Error Rate The collision time TC is the time interval of one Bluetooth packet overlapped with the interfering RFID packet. and m is the channel number ranged from 0 to 99.. m = 0.99 (22) where fCH is the frequency spacing. (0 < Pp < 1) 1 − Pp n =1 (31) Under the assumption that one master and only one slave are present at the network. Assumed that the number of retransmission is unlimited. the average collision time without the effect of the time offset is given by the following integral. and Tb is the bit duration of Bluetooth. S = ∫ XdX + ∫ (2 − X)dX = 1 0 1 1 2 into account both the time domain and the frequency domain is the product of the collision time and the probability of frequency collision. this packet will be retransmitted when next transmitting turn of the node. if multiple slaves are present in the piconet. 1 LR (27) T ′ = E[T ] = T ( X )dX LR ∫ 0 The PER can be derived from the BER and the average collision time. each of which has a bandwidth of 1 MHz.. By deconditioning with the random variable. The retransmission time should be the multiple of the slot time.1. if one data packet is detected to be error. And the symbol S denotes the ratio of the shadowing area and the power of the whole signal. The frequency domain collision model Since the channels of RFID and Bluetooth are unaligned. which is analyzed in section III. the variable X). X ∈ (1. For voice packet of the Bluetooth. the actual collision time taken 221 .e. X ∈ (0. To simplify the analysis. The operating frequencies of the RFID are: fC = (2931 + m) × fCH . The shadowing area in the figure is the area which the two channels are overlapped with. The probability of one successful packet transmission after (k-1) retransmission is: (30) P (k ) = (1 − Pp) Pp k −1 Since the required times of packet transmission is Geometric distributed.. ⎪ X. each channel of RFID is partially overlapped with its adjacent channels. The Average Delay and the Throughput The master and the slave of Bluetooth use Time Division Multiplexing (TDD) to realize the bidirectional link. the voice packet cannot be retransmitted since the voice is delaysensitive and can tolerate some bit errors. Therefore. therefore the retransmission is following the Geometric distribution.. Suppose the BER with and without the interference is denoted by Pb and Pb0. if the packet of RFID and Bluetooth are transmitted by different frequencies. X >2 ⎩ The mean power of one signal is overlapped can be figured out as follows.. Since the Bluetooth is transmitting according the time slots.. PERFORMANCE OF BLUETOOTH UNDER THE INTERFERENCE A. Time Division Multiple Access (TDMA) is adopted to enable the communications between one-to-multiple nodes..8192 MHz. (29) τ = iTS.2] ⎪0.. Forward Error Correct (FEC) is used to combat the bit error in the packet. which is the function of the time offset (i. So the actual collision time is: (26) T = TC × PC = TC /79 The actual collision time T is a random variable.interference and can hop on 100 channels. fCH = 0.2. the master and the (24) Hence. The retransmissions are required when the data packet is detected to contain some errors in the receiver. Since the time offset is uniformly distributed on 0 to LR as Eq. 3. the probability of frequency collision of Bluetooth and RFID is the product of the mean overlapping area and the probability of the Bluetooth channel occupied. 
the average transmission times for a successful packet transmission is: ∞ 1 m = ∑ (1 − Pp ) Pp n −1 × n = . It can be calculated by: (25) PC = Pf × S = 1/79 IV. (18) illustrated. the inband interference is negligible. However. the offset of two different channels is a random variable that denoted by X in Fig. i = 1.. At the same time. Figure 3. The packet error rate (PER) is: Pp = PER = 1 − (1 − Pb)T ′ / Tb (1 − Pb0)(TB −T ′) / Tb (28) B. X <0 ⎧0. The S can be easily obtained according to the offset X as the following formula illustrated. respectively. Since the frequency spacing is less than the bandwidth.

August 2007. (29) is the double of the average transmission times m in above formula. pp. 2006. Klaus Finkenzeller. “Interference in the 2. Therefore. 1240-1246. Finland. Nada Golmie and Frederic Mouveaux. The delayed slot number i in Eq.” August 2003. τ V.5 3 3. vol. “Packet Error Rate Analysis of Zigbee under WLAN and Bluetooth Interference. A. R.1: Wireless Medium Access Control (MAC) and Physical Layer (PHY) Specifications for Wireless Personal Area Networks (WPANs).15. The theoretical result of the mathematical model is also obtained and is shown in Fig. vol. 2003. N.5 2 2.15.45 GHz. [7] The simulation is done and the result is illustrated in figure 4. “Part 15. vol. Tonnerre and O. “Interference of Bluetooth and IEEE 802.5 5 Figure 4. Rome. “Interference Evaluation of Bluetooth and IEEE 802.45 GHz RFID system. CONCLUSION In this paper. R. Part 4: Parameters for air interface communications at 2.” in Proceedings of IEEE ICC’01. Rebala.E. the packet format) are taken into account.4 GHz ISM Band: Impact on the Bluetooth Access Control Performance.11b Systems. The factors that can impact the interference include PHY (the transmitted power. “Interference Modeling and Performance of Bluetooth MAC Protocol. 2 (32) i = 2m = 1 − Pp Hence. 2nd ed. A. the symbol Spayload is the payload length of each packet. 802. In below formula. Helsinki. The PER.4 0. “Radio frequency identification for item management. pp 201-210. Agrawal. June 2001.” in Proceedings of the Fourth ACM International Workshop on Modeling. the average delay time of Bluetooth data packet is: 2TS (33) τ= 1 − Pp Another performance metric. Van Dyck. SIMULATION COMPARED WITH THE THEORETICAL RESULT Comparison of theoretical result and simulation 1 Theoretical 0. Analysis. 2003. the distance between the RFID interferer and the Bluetooth receiver is changed from 1 meter to 5 meters. Soltanian. RFID Handbook. a mathematical model is proposed to quantify the performance degradation of Bluetooth system under the interference of 2.2. IEEE Std. “Part 15. and A. and Simulation of Wireless and Mobile Systems. Soltanian. 4 illustrated. VII. The PER is inversely proportional to the interference distance. λ ⋅ Spayload (34) ρ= VI.” IEEE Transactions on Wireless Communications. pp. the average delay and the throughput of Bluetooth are selected as the performance metrics.” IEEE Transactions on Wireless Communications. 802..2: Coexistence of Wireless Personal Area Networks with Other Wireless Devices Operating in Unlicensed Frequency Bands. Golmie. As Fig. Glomie.” Wireless Networks. the simulation result is very close to the theoretical result.5 4 Interference distance (meter) 4. July 2001. 2825-2830.” Sept. November 2003.” 2005. The anti-interference algorithms can be designed based on the analysis of this paper. 4 too.8 PER of Bluetooth [2] Simulation [3] 0.E. Van Dyck.1. N. [8] [9] 222 . can be calculated easily from the average delay as follows. the distance. In the simulation. 6. the throughput.slave use TDD to realize the duplex. Soo Young Shin and Hong Seong Park. the proposed mathematical model is verified. 2. The result of this paper can provide the criteria for coexisting between RFID and Bluetooth. the modulation scheme and the channel model) and MAC (the frequency hopping pattern. ISO/IEC 18000-4. REFERENCES [1] IEEE Std.2 [6] 0 1 1.6 [4] [5] 0. John Wiley & Sons. Italy. MSWIM’01. Carlos de Morais Cordeiro and Dharma P. Simulation results are compared with the theoretical results. 9.11: Simulation modeling and performance evaluation.

These advantages are important. those three components should run on different processing units in order to obtain optimum performance. the software analysis result will be presented and discussed.uni-bremen. METHOD A. It can be concluded that using a computer monitor as the stimulator. usually nonelectrical signals from the surface of the scalp (EEG activity). the optimization strategy described here led to a stable and reliable system that performed effectively across most subjects without requiring extensive expert help or expensive hardware. There are two main parts that need to be optimized: the flickering animation and the spelling system. Indonesia e-mail: indi@petra. Software Architecture There are three components required for running an SSVEP-based BCI system in a spelling program application as depicted below: I. In this application.00 © 2009 IEEE DOI 10. In an SSVEP-based BCI. e. We tested our program on several computers for the following parameters: frequency range. and the application (spelling program) in the same display/screen makes it easier for the subject to concentrate and it also simplify the system configuration. Ideally.g. and application (spelling program). no matter which software technology is applied (DirectX or OpenGL). there are at least three components for implementing a complete BCI application: stimulator. Brain-Computer Interface. ag@iat. a certain stimulator is required. the spelling program also provides mental feedback for the BCI user. In the spelling program application. laptops. signal processing. frequencies of the evoked potentials match the frequencies of stimuli. Axel Gräser Institute for Automation University of Bremen Bremen.ac. The maximum synthesizable frequency of up to 30 Hz with frequency resolution 0. e. Mean accuracy for the spelling system is 92.e. At least. signal processing. i. spelling program elicited using flickering light [3]. EEG. But integrating those three components in one computer system also gives advantages. We have tested our system on 106 subjects during CeBIT 2008 in Hannover. . Ideally. In addition. for implementing a complete BCI application including stimulator. Therefore.de Abstract— This paper describes an optimization strategy for steady state visual evoked potential (SSVEP)-based braincomputer interface (BCI). steady state visual evoked potential (SSVEP)-based BCI. Three components required for running SSVEP-based BCI system in a spelling program application: a stimulator. those three components should run on different processing units in order to obtain optimum performance. At the beginning. The optimization of spelling system will be focused on the layout and representation of the letter matrix. and frequency stability. a signal processing unit. INTRODUCTION Brain-computer interface (BCI) systems are designed to enable humans to send commands to a computer or machine without using the normal output pathways of the brain [1]. Then. [2]. Germany e-mail: allison@iat. the DSP algorithm translates those frequency responses into commands for controlling cursor movement and character/letter selection.unibremen. SSVEP. The problem arises when using general purpose computers.189 223 Figure 1. the maximum synthesizable stimulator frequency is always half of its minimum refresh-rate. providing the stimulator. are classified and then translated into commands using certain digital signal processing (DSP) algorithms. II.de.2009.id Brendan Allison.11Hz is achieved. Germany. 
This paper will be presented in the following outline. and the spelling program.g. To enable a signal processing system recognizing specific features of the brain signals in the cue-based BCI system. When optimizing the flickering animation. But integrating those three components in one computer system also gives advantages: make it easier for the subject to concentrate and simplifies the system configuration. especially in a spelling program application.1109/ICCET.2009 International Conference on Computer Engineering and Technology Optimization Strategy for SSVEP-Based BCI in Spelling Program Application Indar Sugiarto Department of Electrical Engineering Petra Christian University Surabaya.5%. software architecture will be presented and followed by its optimization strategies. which are usually 978-0-7695-3521-0/09 $25. frequency resolution. Signals from the brain. because end users need a BCI that does not require elaborate hardware (such as customized LED boards or separate computing systems) or expert help (such as to find working SSVEP frequencies or adapt the system to each user). and application (spelling program). flickering light. we will focus on the display driver technology and programming aspects.

left. since we only utilize four possible movement (up. it is reasonable to put vowels’ letter such as A. down. U. and right). or right according to the command interpreted by the Signal Processing Program from user’s EEG signals. Both programs are written in C++. any BCI also requires a fourth component. B. which is called the Signal Processing Program. Based on such probability analysis. Optimization Strategies In the Display Program. • The fact that many words composed mainly by vowels. The spelling system works as follows: • We provide collection of letters arranged in a matrix and a moveable cursor within this matrix. DOWN. The second program. feature extraction and classification. the letter at the cursor position will be selected and displayed on the appropriate position. In this way. For this purpose. The following diagram shows this architecture. The following figure shows the optimization result from the aforementioned approaches. And if the Signal Processing Program is able to detect this intention. we create five flickering boxes on the monitor screen with label: UP. there are two main parts that need to be optimized: the flickering animation and the spelling system. O around the base of the cursor position. To integrate those three components in a single computer. word/phrase. From our previous work [6]. Three components required for running SSVEP-based BCI system in a spelling program application: a stimulator. we can construct a letter matrix in an irregular but efficient way. the spelling program also provides mental feedback for the BCI user. Figure 2. is responsible for signal acquisition. which is called the Display Program. we created two programs working together and connected via TCP/IP socket. RIGHT. One simple way is the square matrix with ascending order as shown below: Figure 3. I. LEFT. In addition. • Since a word formation is achieved first by moving the cursor around. We can arrange the letters in many forms and orders. In addition. The above configuration can be optimized in the following way. In this work. and command generation.Of course. One simple way to organize letters is in a square matrix. it is better if the distance of a letter from the center is kept as near as possible. he/she must concentrate his/her gaze on the corresponding stimulus. Characters are arranged in a rhombus layout in order to achieve higher efficiency for cursor movement. is responsible for displaying flickering animation as the stimulator and also displaying the letter matrix for the spelling program as well as the visual feedback for the user. we used spatial filter called Minimum Energy Combination described in [4] as the core for the signal processing unit. it is better if the home position of the cursor is located at the center of the matrix. namely sensors to acquire the brain signal [1]. Note that this result can be expanded with additional characters if required. and the spelling program. and in this paper we focus on optimization for stimulator and the spelling program. The next part that needs to be optimized is the stimulator itself. the collection of selected letters will form a Figure 4. Commands generated by the second program will be sent to the first program through a network connection. down. E. Detail analysis using letter probability of words done by Thorsten Lüth [5] shows that letter E is the most commonly used letter. it is revealed that 224 . left. • When the user wants to select a letter to form a word. The first program. a signal processing unit. 
The cursor position is indicated by red-shaded cell. This cursor can be moved up. but these are not modified in this work and are not mentioned further here. and SELECT.

In order to achieve robust and high-resolution flickering frequencies, we developed our program with the following approaches:

• Each of the five flickering boxes has its own thread and corresponding timer. The timer intervals are calculated as follows:

    Interval = 1/(2*fLed)    (1)

where fLed is the flickering frequency of the animation. In order to produce a flickering animation at 17 Hz, we have to set the timer interval to 1/(2*17) = 29.4 ms = 29 ms (integer value), according to equation (1). In order to achieve this resolution we use a high resolution timer called the Multimedia Timer, which has a resolution down to 1 ms.

• All of these threads are synchronized in one process, and we set the process priority high, above all other Windows processes. Since we run the program on the Windows platform, we additionally set the CPU mask-affinity of this process so that it utilizes the first CPU core, and give the second core to the Signal Processing Program.

• Two graphics display technologies are utilized and compared: DirectX 9.0 and OpenGL 2.0. Using these technologies, our program performance is improved significantly, because they maximize the utilization of the graphics card.

• When developing the program with the OpenGL approach, we used the QGraphics Framework from Qt [7]. We then optimized the QGraphicsView class with the following parameters:
  o ScrollBarPolicy: ScrollBarAlwaysOff
  o Interactive: false (no interaction with other GUI events)
  o CacheMode: CacheBackground
  o OptimizationFlags: DontClipPainter and DontAdjustForAntialiasing

Animations with plain texture elicit a better SSVEP response than animations with checkerboard texture [9]. That is why we use black-and-white animation with plain texture for the stimulator. A screenshot of our SSVEP-based BCI for the spelling application is shown in Figure 5.

Figure 5. Integrated SSVEP-based BCI for spelling application.

III. RESULT AND DISCUSSION

We tested our program on several computers for the following parameters: frequency range, frequency resolution, accuracy, and frequency stability. To measure the frequency resolution, we first have to measure the timer accuracy during run time. Using the Windows Multimedia Timer and executing the program as a normal-priority process, we found that for a timer interval of 29 ms the standard deviation of the interval is about 0.46 ms. We then increased the process priority and added the CPU mask-affinity described above, after which the standard deviation value is just 0.1 ms. The resulting shift of the flickering frequency is calculated in the following way:

    Frequency shift = 1/(2*(Interval - dT)) - 1/(2*(Interval + dT))    (2)

where dT is the measured standard deviation of the timer interval. It means one may expect that the flickering frequency produced by using this Multimedia Timer may be shifted about 0.55 Hz above or below the expected frequency in the normal-priority case, which means that we obtained an increase of the frequency resolution from 0.55 Hz to 0.11 Hz with the optimized process settings.

It can be concluded that, using a computer monitor as the stimulator, no matter which software technology is applied (DirectX or OpenGL), the maximum synthesizable stimulator frequency is always half of its minimum refresh-rate (for a CRT monitor) or response-time (for an LCD-TFT monitor). The maximum synthesizable frequency of up to 30 Hz is adequate, since the optimum flickering frequency for low-frequency SSVEP-based BCI is around 15 Hz [8]. The following graph shows the comparison between the DirectX and OpenGL approaches.

Figure 6. Comparison between the OpenGL and DirectX methods for displaying flickering animations on the computer screen.

To measure the flickering stability against variation of the animation size and the number of animation objects, we conducted experiments with various CPU ratings, varying the animation size and the number of animation objects. In addition, we tested our program with subjects during CeBIT 2008 in Hannover and collected useful information such as spelling speed, accuracy, ITR, and many important neuro-psychological parameters.
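Equations (1) and (2) can be made concrete with a short sketch (Python is used here purely for illustration; the original program was written against the Windows Multimedia Timer and Qt). The 17 Hz example and the jitter values follow the measurements above; rounding to whole milliseconds models the 1 ms timer resolution.

    def timer_interval_ms(f_led_hz):
        """Eq. (1): half-period of the flicker, in milliseconds."""
        return 1000.0 / (2.0 * f_led_hz)

    def frequency_shift_hz(f_led_hz, sigma_ms):
        """Eq. (2) as reconstructed above: spread of the produced frequency
        when the timer fires early or late by sigma_ms."""
        interval = round(timer_interval_ms(f_led_hz))   # integer ms, 1 ms resolution
        return (1000.0 / (2.0 * (interval - sigma_ms))
                - 1000.0 / (2.0 * (interval + sigma_ms)))

    print(timer_interval_ms(17.0))          # 29.41... ms, rounded to 29 ms
    print(frequency_shift_hz(17.0, 0.46))   # ~0.55 Hz (normal priority)
    print(frequency_shift_hz(17.0, 0.10))   # ~0.11 Hz (high priority + affinity)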

The following graph shows the result from this experiment.

Figure 7. Comparison of optimum animation (LED) size for three computers in four different procedures: 1) a single non-textured object; 2) a single checkerboard-textured object; 3) three non-textured objects; 4) six checkerboard-textured objects, displayed simultaneously on the screen.

It can be seen that the highest performance is achieved by the HP Compaq nx9420, while the lowest performance belongs to the Acer Aspire 5510. According to the Windows Experience Index of Windows Vista, the HP Compaq nx9420 has the highest index (about 5), the Acer Extensa 5620 the next highest (about 4), and the Acer Aspire 5510 the lowest (about 3). It seems that the software performance, as indicated by the size of the animation objects as well as the number of animation objects displayable simultaneously on the screen without disturbance, is greatly influenced by the computer performance.

During the experiment, the subject was asked to spell five words: 'BCI', 'Chug', 'Brain Computer Interface', 'Siren', and one free spelling. On average, subjects needed about two minutes to spell 'BCI' (about 122 sec.), 'Chug' (about 139 sec.) and 'Siren' (about 111 sec.), and about eight minutes to spell 'Brain Computer Interface' (about 475 sec.). The mean accuracy for all five words was 92.72%, with a mean ITR of 13.852 bits/minute. However, external factors such as light reflection and interference will also affect the overall performance of the spelling system using the SSVEP-based BCI.
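The ITR values quoted above follow the standard Wolpaw formula. A minimal sketch is given below; the number of selectable targets and the selection rate are assumptions for illustration, since the paper does not state them explicitly.

    import math

    def bits_per_selection(n_targets, p):
        """Wolpaw information transfer per selection, in bits."""
        if p >= 1.0:
            return math.log2(n_targets)
        return (math.log2(n_targets)
                + p * math.log2(p)
                + (1.0 - p) * math.log2((1.0 - p) / (n_targets - 1)))

    def itr_bits_per_minute(n_targets, p, selections_per_minute):
        return bits_per_selection(n_targets, p) * selections_per_minute

    # Illustration with the reported mean accuracy of 92.72%, assuming a
    # 5-command interface and 8 selections per minute (both hypothetical):
    print(itr_bits_per_minute(5, 0.9272, 8.0))   # ~14.4 bits/minute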
IV. CONCLUSION

The optimization strategy for the Bremen SSVEP-based BCI in a spelling program application has been presented, and the resulting performance of the system has been evaluated. We have tested our system on 106 subjects during CeBIT 2008 in Hannover. We summarize the optimization strategy as follows: using advanced graphics driver technology (DirectX and OpenGL), using a high resolution timer (the Windows Multimedia Timer), optimizing the multi-thread feature of a dual-core processor, and selecting a computer with a high CPU rating. Future work should address the two concerns noted above, the dependence on computer performance and the influence of external factors, and try to further improve SSVEP BCI performance while minimizing the need for expert help, external LEDs, or expensive hardware.

REFERENCES

[1] J.R. Wolpaw, N. Birbaumer, D.J. McFarland, G. Pfurtscheller, and T.M. Vaughan, "Brain-Computer Interfaces for Communication and Control," Clinical Neurophysiology 113, 2002.
[2] B. Allison, D. McFarland, G. Schalk, S.D. Zheng, M.M. Jackson, and J.R. Wolpaw, "Towards an Independent Brain-Computer Interface Using Steady State Visual Evoked Potentials," Clinical Neurophysiology 119, 2008: 399-408.
[3] Y. Wang, R. Wang, X. Gao, B. Hong, and S. Gao, "A Practical VEP-Based Brain-Computer Interface," IEEE Transactions on Neural Systems and Rehabilitation Engineering, Vol. 14, No. 2, June 2006.
[4] B. Allison, B. Graimann, and A. Gräser, "High frequency SSVEPs for BCI applications," Computer-Human Interaction 2008, Florence, Italy, April 2008.
[5] O. Friman, I. Volosyak, and A. Gräser, "Multiple Channel Detection of Steady-State Visual Evoked Potentials for Brain-Computer Interfaces," IEEE Transactions on Biomedical Engineering, Vol. 54, 2007: 742-751.
[6] "Tools zur Verbesserung der Leistung von Brain-Computer Interfaces," Diplomarbeit, Universität Bremen.
[7] Trolltech, Inc., "Creating Cross-Platform Visualization UIs with Qt and OpenGL," Trolltech Whitepaper.
[8] J. Ding, G. Sperling, and R. Srinivasan, "Attentional Modulation of SSVEP Power Depends on the Network Tagged by the Flicker Frequency," Cerebral Cortex, Oxford University Press, Vol. 16: 1016-1029, 2006.
[9] I. Volosyak, T. Lüth, and I. Sugiarto, "Display Optimization in SSVEP BCIs," Computer-Human Interaction 2008, Florence, Italy, April 2008.

Session 4


2009 International Conference on Computer Engineering and Technology

A Novel Method for the Web Page Segmentation and Identification

Jing Wang(1), Zhijing Liu(1)
(1) School of Computer Science and Technology, Xidian University, Xi'an 710071, China
E-mail: wangjing@mail.xidian.edu.cn, liuprofessor@163.com

Abstract: A method of page segmentation and recognition based on a generalized hidden Markov model (GHMM) is presented in this paper. Considering the limitation of traditional HMMs, which take only the semantic term as the observed emission feature, while web information contains other emission features such as format and layout, we use multiple emission features (term, layout, and formatting) instead of a single emission feature (term), and these features are observed to help improve the state transition estimation of the HMM. The method takes both the structure and the content of web pages into consideration. The experimental results indicate that, compared with the original page segmentation algorithm, the operation efficiency of this algorithm is enhanced by 14.3% and the effect of segmentation is significantly improved.

Keywords: Web Page Segmentation; Hidden Markov Model; Web Page Identification

I. INTRODUCTION

Along with the rapid and continuous development of Internet technology, the Web has become the biggest information source in the world. How to extract the needed information, however, is a pressing problem: we can accurately deal with web information only when a page is segmented precisely. Page segmentation and recognition is therefore an important part of web page processing and has become a new research direction in recent years. The most popular methods of web page segmentation are DOM tree marking [1], page location coordinates of entities [2], and the VIPS page segmentation algorithm [3]. These methods mainly consider the layout and structure of a web page; they are too reliant on the structure and do not take the features of the web content in different areas into account. Because of its easily established models, accurate modeling, and high recognition rate, the HMM has attracted the attention of researchers for web page segmentation. Considering the limitation of traditional HMMs described above, a method based on a generalized hidden Markov model, which considers the structure as well as the content of web pages, is introduced in this paper; it is more suitable for the features of web pages to take these factors into consideration. With the GHMM, the different blocks of a homepage can be distinguished.

II. THE ANALYSIS OF WEB PAGE

A. The structure of Web page

It is a characteristic of web pages that logically interdependent content blocks are organized together and express a central subject; we may think of a page as being composed of different content blocks. The so-called page segmentation divides a subject page into regions, including the general navigation area, the theme text area, the theme labels area, and so on, according to the page content as well as the structural configuration; we can consider the web page segmentation from both the content and the structure of the web page. After web page segmentation, it is easier for machines to identify the regions and perform automatic extraction, and the division of the homepage can be carried out effectively.

According to their manifestation and function, web pages can be divided into three categories. One is the label page, also called the leader page, which is mainly composed of text hyperlinks whose role is to generalize the themes of the pages they link to. Another kind is the subject homepage, which is composed of a large amount of non-link text and few hyperlink texts. Some homepages are composed of few texts and many pictures; these are called picture homepages.

B. The text attribute of Web page

In text processing by a traditional HMM, the output of each observation is released by the term shift only. A web page, however, carries more embellishment than plain text: for example, if a piece of text is a hyperlink, its color differs from that of the general text, and headline fonts are displayed much larger than body text. There are other layout attributes such as font, size, italics, background color, and background images. The text in different blocks is also displayed according to certain forms depending on its content, and different characteristic words can distinguish different content texts. The frequency of each word present in different content documents obeys a certain rule: the higher the frequency, the more important the word. We therefore extract characteristic words and constitute characteristic vectors with the VSM model, and the text in each block is expressed in vector form. The Vector Space Model (VSM) [4] is considered one of the most popular models for representing the features of text contents.
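As a small illustration of the VSM representation, the sketch below builds a characteristic-word vector weighted by relative term frequency (the higher the frequency, the higher the weight). Whitespace tokenization and plain frequency weights are assumptions here; the paper does not specify its exact tokenizer or weighting formula.

    from collections import Counter

    def vsm_vector(text):
        """Build V(d) = ((t1, w1(d)), ..., (tn, wn(d))) with simple
        relative term frequencies as the weights wi(d)."""
        terms = text.lower().split()          # assumed tokenizer
        counts = Counter(terms)
        total = sum(counts.values())
        return [(t, c / total) for t, c in counts.most_common()]

    print(vsm_vector("brain computer interface the interface of the brain"))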

A document d is expressed as a vector composed of a series of characteristic words:

    V(d) = (t_1, w_1(d); t_2, w_2(d); ...; t_n, w_n(d)),

where t_i denotes the i-th keyword and w_i(d) is the weight of t_i in document d; t_1, ..., t_n refer to a system of n-dimensional coordinates, and w_1(d), ..., w_n(d) are the corresponding coordinate values.

III. GHMM

A web page can generally be divided into navigation, theme text, theme labels, user interaction, copyright and website label regions. Although their respective positions may differ according to each website's design style, this does not affect the division of the content. So we can establish a hidden Markov model of five states for the HMM: all observation vectors are divided among these states, each observation vector belonging to a certain state, as shown in Figure 1. The states themselves are not directly visible. The transition probability between the blocks is decided by the released observation features, which reflect the changes in the HMM; in this way the HMM records the individual differences between the different regions of a homepage.

Figure 1. The state transition structure of the GHMM.

Formally, a GHMM consists of the following five parts [6]:

a) N, the number of states in the model. The states are S = {S_1, S_2, ..., S_N}; denoting the state at time t by q_t, we have q_t \in S.

b) M, the number of observable symbols, V = {V_1, ..., V_M}; the symbol observed at time t is o_t \in V. In a GHMM an observation symbol k is extended from its mere term attribute to a set of attributes k_1, k_2, ..., k_z, where Z is the number of attribute sets of the observation symbols and M_s is the number of distinct observation symbols of the attribute s.

c) \pi, the initial state probability distribution \pi = {\pi_1, ..., \pi_N}, where \pi_i = P(q_1 = S_i), 1 \le i \le N.

d) A, the hidden state transition probability distribution A = (a_{ij})_{N \times N}, where a_{ij} = P(q_{t+1} = S_j | q_t = S_i) is the probability of being in state j at time t+1 given that we were in state i at time t, 1 \le i, j \le N.

e) B, the emission probability distribution B = {B_1, B_2, ..., B_Z}, with B_s = (b_j(k_s))_{N \times M_s} and

    b_j(k_s) = P(o_{t,s} = V_k | q_t = S_j),  1 \le j \le N, 1 \le k \le M_s, 1 \le s \le Z,    (2)

the probability of observing the symbol V_k of the attribute s given that we are in state j. The overall emission probability of a symbol is taken as a linear combination of these attributes:

    b_j(k) = \sum_{s=1}^{Z} \alpha_s b_j(k_s),

where \alpha_s is the weight factor for the s-th attribute, 0 \le \alpha_s \le 1 and \sum_{s=1}^{Z} \alpha_s = 1. The GHMM is then defined as \lambda = (A, B, \pi).

A common goal of learning problems that use an HMM is to figure out the state sequence that has the highest probability of having produced a given observation sequence. For GHMM-based learning problems, the goal is to find the state transition sequence I = i_1, i_2, ..., i_T which maximizes P(O, I | \lambda):

    P(O, I | \lambda) = P(O | I, \lambda) P(I | \lambda)
                      = \pi_{i_1} b_{i_1}(o_1) a_{i_1 i_2} b_{i_2}(o_2) ... a_{i_{T-1} i_T} b_{i_T}(o_T).    (3)

Substituting the combined emission probabilities, equation (3) becomes

    P(O, I | \lambda) = \pi_{i_1} [\sum_s \alpha_s b_{i_1,s}(o_{1,s})] a_{i_1 i_2} [\sum_s \alpha_s b_{i_2,s}(o_{2,s})] ... a_{i_{T-1} i_T} [\sum_s \alpha_s b_{i_T,s}(o_{T,s})],    (4)

which can be solved efficiently by an extended Viterbi algorithm [5].
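A minimal sketch of the extended Viterbi recursion for the GHMM described above, with the emission probability of each state formed as the weighted sum over the attribute-wise emission tables. The array layout and the toy numbers are illustrative assumptions, not the authors' implementation.

    import numpy as np

    def viterbi_ghmm(pi, A, B, alpha, obs):
        """pi: (N,) initial probs; A: (N, N) transition probs;
        B: list of Z arrays with B[s][j, v] = b_j(k_s = v);
        alpha: (Z,) attribute weights summing to 1;
        obs: (T, Z) integer symbol indices, one per attribute.
        Returns the most probable state sequence."""
        N, T = len(pi), len(obs)

        def emit(o):
            # combined emission b_j(k) = sum_s alpha_s * b_j(k_s)
            return sum(a * Bs[:, o[s]] for s, (a, Bs) in enumerate(zip(alpha, B)))

        delta = pi * emit(obs[0])                 # best path scores per state
        psi = np.zeros((T, N), dtype=int)         # backpointers
        for t in range(1, T):
            scores = delta[:, None] * A           # scores[i, j]: via state i to j
            psi[t] = scores.argmax(axis=0)
            delta = scores.max(axis=0) * emit(obs[t])
        path = [int(delta.argmax())]
        for t in range(T - 1, 0, -1):
            path.append(int(psi[t][path[-1]]))
        return path[::-1]

    # toy example: two states, two attributes with binary symbols
    pi = np.array([0.6, 0.4]); A = np.array([[0.7, 0.3], [0.4, 0.6]])
    B = [np.array([[0.9, 0.1], [0.2, 0.8]]), np.array([[0.5, 0.5], [0.1, 0.9]])]
    print(viterbi_ghmm(pi, A, B, np.array([0.5, 0.5]), np.array([[0, 1], [1, 1], [1, 0]])))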

For web page segmentation we consider three emission attributes, and the emission probability is specialized as a linear combination of them:

    b_j(k) = \alpha_1 b_j(k_1) + \alpha_2 b_j(k_2) + \alpha_3 b_j(k_3),    (5)

where \alpha_1, \alpha_2 and \alpha_3 are the weights for the term attribute, the format attribute and the layout attribute respectively, 0 \le \alpha_i \le 1 and \alpha_1 + \alpha_2 + \alpha_3 = 1, and b_j(k_1), b_j(k_2), b_j(k_3) correspond to the term feature vector of the VSM, the format attribute and the layout attribute respectively.

IV. EXPERIMENTAL RESULT AND ANALYSIS

In order to verify the page segmentation and identification method presented in this paper, we randomly selected 200 online pages, of which 150 are used as the training set while the remaining 50 are used as the test set. The experiment compares the accuracy of three extraction methods: (1) the DOM tagging tree method; (2) the VIPS page segmentation method; (3) the method based on the GHMM. We take the recall and precision ratios as the evaluation standards, together with the F measure, which combines recall (R) and precision (P):

    R = number of correct Web segmentations / number of all Web segmentations,
    P = number of correct Web segmentations / number of possible Web segmentations,
    F = 2 * R * P / (R + P).

Table 1 gives the comparison results of these three methods in average precision.

TABLE 1. THE THREE METHODS IN AVERAGE PRECISION

    Method       P        R        F
    Method(1)    73.5%    76.3%    74.9%
    Method(2)    66.6%    70.2%    68.3%
    Method(3)    90.2%    87.4%    88.8%

From Table 1 we can see that the method based on the GHMM has the highest recognition accuracy.
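The three evaluation measures translate directly into code; the counts in the example call are placeholders, not figures from the experiment.

    def evaluate(correct, all_segments, possible_segments):
        r = correct / all_segments          # recall
        p = correct / possible_segments     # precision
        f = 2 * r * p / (r + p)             # F measure
        return r, p, f

    print(evaluate(41, 50, 47))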
When we compare the segmentation results for the different areas of a web page, we obtain the precision values shown in Table 2.

TABLE 2. THE PRECISION OF THE THREE METHODS IN DIFFERENT AREAS OF A WEB PAGE

    Area of Web page    Method(1)    Method(2)    Method(3)
    Navigation          69.7%        75.8%        89.2%
    Theme Text          92.5%        96.5%        96.1%
    Theme Labels        61.9%        70.4%        81.4%
    User Interaction    82.7%        85.8%        93.6%
    Copy Right          87.2%        81.3%        83.2%
    Website Label       70.3%        79.2%        78.8%

The results indicate that the accuracy of the improved method is better by about 20 percent in the navigation area and the theme labels area. These two areas have significant features, for example the text of related topic links is displayed in the form of hyperlinks, so the method achieves a higher accuracy rate there than the other methods do. The results also show that the new method improves the page segmentation and recognition accuracy markedly.

V. CONCLUSION

According to the characteristics of web pages, a new page segmentation method based on an improved GHMM is presented in this paper. The novel method takes both the layout and the content of pages into account, so it has a higher accuracy rate than other methods. In the future, we will improve the effective learning of the GHMM, so that the model can be fully automated without a large amount of manual intervention.

ACKNOWLEDGEMENTS

This research project was promoted by the National Science & Technology Pillar Program No. 2007BAH08B02.

REFERENCES

[1] S. Chakrabarti. Integrating the document object model with hyperlinks for enhanced topic distillation and information extraction. Proc. Tenth International World-Wide Web Conference, p 211-220.
[2] Y. Osawa, N.E. Benson, M. Yachida. KeyGraph: Automatic indexing by co-occurrence graph based on building construction metaphor. Trans IEICE 2003, J82-D-I: 391-400.
[3] Weimin Xue, Hong Bao, Weitong Huang, et al. Web Page Classification Based on VSM. Proceedings of the 6th World Congress on Intelligent Control and Automation, June 21-23, 2006, pp. 6111-6114.
[4] G. Salton, C. Buckley. Term-weighting approaches in automatic text retrieval. In: Readings in Information Retrieval, Morgan Kaufmann.
[5] Du Shi-ping. The Viterbi algorithm of mixture of HMM2 [J]. Journal of Yunnan University, 28(2): 98-102.
[6] Jinlin Chen, Ping Zhong, Terry Cook. Detecting Web Content Function Using Generalized Hidden Markov Model. IEEE Proceedings of the 5th International Conference on Machine Learning and Applications, 2006, p 323-328.

2009 International Conference on Computer Engineering and Technology

Disturbance Observer-based Variable Structure Control on the Working Attitude Balance Mechanism of Underwater Robot

LI Ming(1), LIU Heping(2)
(1) Department of Precision Machinery, School of Electronic Information Engineering, Henan University of Science & Technology
(2) School of Mechatronics Engineering and Automation, Shanghai University
lim@mail.haust.edu.cn

Abstract

When an underwater robot is to work under the water, the manipulator has to reach out. This changes the center of gravity and causes the robot to pitch, and it is difficult to operate a pitching robot accurately under the water. An automechanism was developed to balance the pose of the robot, and a sliding mode variable structure controller was designed to control the balance device, in which a disturbance observer is used to observe the disturbances. This reduces the switching gain of the control system greatly and, of course, further weakens the chattering. A practical result was achieved for the control of the pitch change of the robot caused by the manipulators and the detecting pan/tilt, which indicates the practicability of the balance device and its control approach.

1. Introduction

Nowadays, with the development of our society and the progress of science, the application of the underwater robot is becoming more and more popular. The working modes range from cleaning, kelp reaping, sampling, drilling, cutting and gripping in civil fields to mining, mine sweeping, anti-terrorism and explosive-proof tasks in military fields, and more missions will be finished by underwater robots substituting for human aquanauts. Intelligent underwater robots with the abilities of perceiving, thinking and decision-making, and with strong multi-functionality, will be developed rapidly.

A robot is a complicated dynamic system with serious nonlinear properties. For an underwater robot with an open-frame structure, a strong coupling takes place among the movements in each degree of freedom, and a strongly nonlinear movement always exists. The properties also differ between static water and flowing water, and a large number of tests are necessary to estimate the numerous hydrodynamic coefficients, so it is unpractical to set up an accurate dynamic model. The control of the position and attitude of an underwater robot is therefore an arduous task, and it is difficult to make a maneuver with a pitched robot.

According to reference [1], the fixed coordinate system E and the motion coordinate system Oxyz are set up as shown in Fig. 1, and both origins are on the center of gravity of the ROV. The center of gravity coincides with the center of buoyancy in the x-y plane, and a distance B0, named the metacentric height, is designed along the z axis to produce an uprighting moment:

    x_G0 = x_F0,  y_G0 = y_F0,  z_F0 - z_G0 = B0.

When not working, the robot is in the balance position with a tiny buoyancy, and the manipulator equipped on the robot is usually furled beneath the robot, as shown in Fig. 2. When the robot works, the manipulator has to reach out; this can change the center of gravity and cause the robot to pitch. As the balance device developed here is a single-axis system, a double-axis system would be needed for the balance of both pitching and rolling.

This makes the whole barycenter of the robot transfer from G0(x_G0, y_G0, z_G0) to G(x_G, y_G, z_G): the barycenter of the manipulator arms varies from w0(x_m0, y_m0) in the original state to w(x_m, y_m) in the working state, as shown in Fig. 3. The move of the barycenter of the robot certainly causes a pitching of the robot.

Figure 1 Fixed and motion coordinate systems
Figure 2 Underwater robot with two manipulators and detecting pan/tilt
Figure 3 Reaching out of the manipulator and the operating of the detecting pan/tilt

The robot is driven by propellers and moves to the appointed position. The underwater robot test platform described in this paper has two manipulators, symmetrical to the x axis in structure, and during the process of reaching out from the bottom of the robot the manipulator has only a change of barycenter along the x axis. Why not adjust the center of gravity of the robot directly according to the change of the barycenter of the manipulator arms? Although the movement of the manipulator basically keeps to a cosine law, the viscous resistance and the additional mass also influence the movement. In addition, the operating of the pan/tilt with its sonar, lights and camera, and the dragging of the neutrally buoyant tether, impose inestimable disturbances on the attitude of the robot. To solve the problem, a balance mechanism was developed (shown in Fig. 4) to balance and compensate for the move of the gravity center of the robot, based on the pitching angle and angular velocity measured by the attitude transducer: a balance body is pulled by a step motor along guiding rails. In this way it is direct and rapid to control the attitude of the robot.

Figure 4 Balance mechanism of the attitude

2. Modelling

The mass of the balance body of the attitude balance mechanism is m2. The position of the barycenter of the balance body, when the barycenter of the robot system is at the balance, is taken as the origin. Assume that when pitching and being corrected, the robot turns only around the y axis and no other movements exist. The moment of inertia of the robot is J_y. The change of the position X of the balance body causes a shift of the center of gravity of the robot, and the pitch angle \theta of the robot changes accordingly. The turning equation of the robot is

    J_y \ddot{\theta} = -M \dot{\theta} - m2 X + f1,

where M is the hydrodynamic coefficient when turning around the y axis and f1 represents the disturbances, including the change of the moment of inertia, the viscous resistance and the additional mass caused by the moves of the manipulators, the detecting pan/tilt and the balance object, as well as the disturbance produced by the dragging of the neutrally buoyant tether.

3. Disturbance Observer-based Variable Structure Control

Let a = M/J_y, u = m2 X / J_y and f = f1/J_y. The turning equation can then be re-written as

    \ddot{\theta} = -a \dot{\theta} - u + f.    (1)

The desired value of the pitching angle is \theta_d; the error and its derivative are defined as e1 = \theta_d - \theta and e2 = \dot{e}1. Putting these into (1) yields the error state equation

    \dot{e}1 = e2,  \dot{e}2 = -a e2 + u - f.    (2)

According to the theory of sliding mode variable structure control [2], the movement of the state of the system is forced along a sliding mode surface by the switching of the control parameters; when the parameters of the controlled object change and external disturbances are applied, the sliding mode is unchanged and robust. But the chattering of the sliding mode control is inevitable when the control is switching. To reduce the chattering, a disturbance observer is adopted to estimate the disturbances so that the chattering can be weakened [3][4][5][6].

Let \hat{f} be the estimate of the disturbance and \hat{e}2 the estimate of e2, and let the gains k1 and k2 be selected by the pole placement method. As the disturbances produced by the moves of the manipulators and the pan/tilt, as well as by the dragging of the neutrally buoyant tether, are slow movements, it can be assumed that \dot{f} = 0. The disturbance observer is designed as

    [\dot{\hat{f}}; \dot{\hat{e}}2] = [[0, 0], [-1, -a]] [\hat{f}; \hat{e}2] + [0; 1] u + [k1; k2] (e2 - \hat{e}2),    (3)

that is, \dot{\hat{f}} = k1 (e2 - \hat{e}2) and \dot{\hat{e}}2 = -\hat{f} - a \hat{e}2 + u + k2 (e2 - \hat{e}2).

The sliding line is chosen as

    s = c e1 + e2,    (5)

and the sliding mode controller is defined as

    u = -\zeta e1 - K sgn(s) + \hat{f},  with  \zeta = \alpha if s e1 > 0, \zeta = \beta if s e1 < 0,    (4)

where K represents the switching gain. The analysis of stability is as follows. The Lyapunov function is selected as

    V = V1 + V2,  V1 = (1/2) s^2,  V2 = (1/(2 k1)) \tilde{f}^2 + (1/2) \tilde{e}2^2,    (6)

where \tilde{f} = f - \hat{f} and \tilde{e}2 = e2 - \hat{e}2. From (5) and (2), substituting the controller (4),

    \dot{s} = c e2 - a e2 + u - f
            = (c - a)(s - c e1) - \zeta e1 - K sgn(s) - \tilde{f}
            = (c - a) s - e1 (c(a - c) + \zeta) - K sgn(s) - \tilde{f},

so that

    s \dot{s} = (c - a) s^2 - s e1 (c(a - c) + \zeta) - K |s| - \tilde{f} s
             \le (c - a) s^2 - s e1 (c(a - c) + \zeta) + (|\tilde{f}|_max - K) |s|.    (7)

From (3), with \dot{f} = 0, a direct computation gives

    \dot{V}2 = -(a + k2) \tilde{e}2^2.

Consequently, provided that the conditions

    k1 > 0,  k2 \ge a,  a > c,  K \ge |\tilde{f}|_max    (9)

hold and the switching gain \zeta satisfies

    \zeta = \alpha \ge c(a - c) if s e1 \ge 0,  \zeta = \beta \le c(a - c) if s e1 \le 0,    (10)

it can be found from (7) and (9) that \dot{V} \le 0 is verified. This means that the sliding mode control is stable.

4. Simulation

The control algorithm was simulated with Matlab®/Simulink® [7][8]. The mass of the robot is 175 kg. The center of gravity of the manipulator arms, which have a mass of 0.5 kg, varies from 0 to 900 mm while working, which produces a maximum pitching moment of up to 0.45 N·m. In order to balance this moment as well as the other disturbances, the balance object is designed to have a mass of 2.2 kg and to slide over a distance from 50 to 250 mm, which provides a balance moment from 0.11 N·m to 0.55 N·m [9][10].

It is assumed that the disturbance applied to the robot obeys a law of sines:

    f = 5 + 0.3 sin(0.5 \pi t).

The high-frequency pitching induced by waves is not taken into account: since it is the pitching of the whole robot induced by the movements of the manipulators and the detecting pan/tilt and by the dragging of the neutrally buoyant tether, which has a slow velocity, the condition of slow velocity mentioned above for the sliding mode controller is satisfied.

The results of the simulations presented here are promising, as shown in Fig. 5 to Fig. 8. It is the adoption of the disturbance observer that allows the condition K \ge |\tilde{f}|_max to be satisfied so easily: the switching gain K is depressed greatly, and the chattering is therefore weakened.

Figure 5 Response of the error state
Figure 6 Control input
Figure 7 e1-e2 plane trajectory
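The original study simulated the closed loop in Matlab/Simulink; the following is a minimal re-creation in Python of the loop described above (plant (1), observer (3), controller (4)-(5)). The gains, switching values and initial conditions are illustrative choices, not the values used by the authors.

    import numpy as np

    a, c, K, k1, k2 = 2.0, 1.0, 1.0, 30.0, 8.0   # illustrative gains (k2 > a, a > c)
    alpha, beta = 2.0, -4.0                      # illustrative switching gains zeta
    f = lambda t: 5.0 + 0.3 * np.sin(0.5 * np.pi * t)

    dt, T = 1e-3, 7.0
    theta, theta_dot = 0.2, 0.0                  # initial pitch error of 0.2 rad
    f_hat, e2_hat = 0.0, 0.0
    for step in range(int(T / dt)):
        t = step * dt
        e1, e2 = -theta, -theta_dot              # desired pitch is 0
        s = c * e1 + e2                          # sliding line, eq. (5)
        zeta = alpha if s * e1 > 0 else beta
        u = -zeta * e1 - K * np.sign(s) + f_hat  # controller, eq. (4)
        # disturbance observer, eq. (3)
        f_hat += dt * k1 * (e2 - e2_hat)
        e2_hat += dt * (-f_hat - a * e2_hat + u + k2 * (e2 - e2_hat))
        # plant integration (explicit Euler), eq. (1)
        theta_ddot = -a * theta_dot - u + f(t)
        theta_dot += dt * theta_ddot
        theta += dt * theta_dot

    print(theta, f_hat)   # residual pitch error and disturbance estimate at t = 7 s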

References 236 . The detecting pan/tilt pitched at a angular velocity of 60 /s(without rolling movement). The amplitude of the switching gain was reduced greatly and the chattering in the sliding mode control was weakened certainly.which are greatly appreciated by the authors. the pitching angle would be compensated within 4. a balance mechanism was developed and a sliding mode controller was designed to control the device. National Natural Science Foundation of China (Project No.6 degrees at 0. Y0102). Meanwhile a disturbance observer was defined in the controller to estimate the disturbances.9. The control goal had been implemented basically. Figure 9 Pitching change caused by the moves of the manipulator and pan/tilt without the work of balance control 8.10. Acknowledgment The research projects concerned in this paper were financially supported by State Leading Academic Discipline Fund and Shanghai Leading Academic Discipline Fund of Shanghai University (Project No. A good effect had been achieved upon the pitching of the robot induced by the manipulator and detecting pan/tilt.4 second.5 seconds when the balance mechanism was working as shown in Fig. While the balance mechanism didn’t work. As it is a single axis device. detecting pan/tilt and tether. This shows that the balance mechanism as well as the control approach is applicable. 7.7 6 5 f and ef 4 3 2 1 0 0 1 2 3 t(s) 4 5 6 7 Figure 8 Estimate of the disturbance 5. The manipulator turned out from the bottom of the robot at a angular velocity of 150 /s. a two DOF balance mechanism is needed to compensate the pitching round the y axis and the rolling round the x axis. The max pitching angle reached only 5.3 degrees as shown in Fig. BB 67 and No. Experiment The operating experiment of the underwater robot was made in a static water pool. Figure 10 Pitching change of the robot controlled by the balance mechanism 6. the pitching can only be compensated around the y axis. 60605028) and The National High-tech Research and Development Program (Project No. Conclusion Aimed at the pitching of the robot caused by the move of the manipulator. For the robot with a barycenter disturbance along the y axis. the max pitching angle of the robot would reach about 10. Nevertheless. 2007AA04Z225).

[1] Li D. Motion and Modelling of Ship. Harbin: Harbin Engineering University Press, 1999.
[2] Yao Q. and Wu H.S. Variable Structure Control System. Chongqing: Chongqing University Press, 1997.
[3] Chang Jeang-Lin, Wu Tsui-Chou. Robust disturbance attenuation with unknown input observer and sliding mode controller. Electrical Engineering, v 90, n 7, 2008, pp. 493-502.
[4] Kawamura Atsuo, Itoh Hiroshi, Sakamoto Kiyoshi. Chattering reduction of disturbance observer based sliding mode control. IEEE Transactions on Industry Applications, v 30, n 2, Mar-Apr 1994, pp. 456-461.
[5] Oudghiri Mohammed, Chadli Mohammed, El Hajjaji Ahmed. Lateral vehicle velocity estimation using fuzzy sliding mode observer. 2007 Mediterranean Conference on Control and Automation, MED, 2007, Art. No. 4433910.
[6] Wang Ying, Xiong Zhenhua, Ding Han. Robust controller based on friction compensation and disturbance observer for a motion platform driven by a linear motor. Proceedings of the Institution of Mechanical Engineers, Part I: Journal of Systems and Control Engineering, v 220, n 1, 2006, pp. 33-39.
[7] Yu, Zhao Y., Huang J. Simulation study on a friction compensation method for the inertial platform based on the disturbance observer. Proceedings of the Institution of Mechanical Engineers, Part G: Journal of Aerospace Engineering, v 222, n 3, 2008, pp. 341-346.
[8] Liu J.K. MATLAB Simulation for Sliding Mode Control. Beijing: Tsinghua University Press, 2005.
[9] Heping LIU, Zhenbang GONG. Upper Bound Adaptive Learning of Neural Network for the Sliding Mode Control of Underwater Robot. Proceedings of the International Conference on Advanced Computer Theory and Engineering, Phuket, 2008, pp. 446-451.
[10] Heping LIU, Zhenbang GONG. The Anti-wave Control of Small Open-frame Underwater Robot. Proceedings of the 3rd International Conference on Intelligent System & Knowledge Engineering, Xiamen, 2008, pp. 653-658.

2009 International Conference on Computer Engineering and Technology

ADAPTIVE OFDM Vs SINGLE CARRIER MODULATION WITH FREQUENCY DOMAIN EQUALIZATION

Kaur Inderjeet*, Thakur Kamal*, Arora Prabhjyot* (Dept of CSE / Dept of ECE, *Institute of Technology & Management, Gurgaon, India; inderjeetk@gmail.com, kamalthakur12@gmail.com, aroraprabh@gmail.com)
Kulkarni M.# (Dept of ECE, #National Institute of Technology, Suratkal, India; mkuldce@gmail.com)
Gupta Daya$ (Dept of CSE, $Delhi College of Engineering, Delhi, India; daya_gupta2005@yahoo.co.in)

Abstract: The aim of the present paper is to compare multi-carrier and single carrier modulation schemes for wireless communication systems. Different single carrier and multi-carrier transmission systems are simulated with time-variant transfer functions measured with a wideband channel sounder. The individual sub-carriers are modulated with fixed and adaptive signal alphabets, and both a frequency-independent and the optimum power distribution are used. The performance of single carrier and multi-carrier modulation schemes is compared for a frequency-selective fading channel considering un-coded modulation schemes.

Keywords: OFDM, QAM, BER, LOS, IFFT

I. INTRODUCTION

In this paper wideband frequency-selective radio channels are used for investigating the transmission of digital signals. Frequency-selective fading caused by multipath time delay spread degrades the performance of digital communication channels by causing intersymbol interference; this results in an irreducible BER and imposes an upper limit on the data symbol rate. In a multipath radio channel, frequency-selective fading can also result in large variations in the received power of each carrier. Un-coded OFDM (orthogonal frequency division multiplexing) loses all frequency diversity inherent in the channel: a dip in the channel erases the information data on the subcarriers affected by the dip, and this information cannot be recovered from the other carriers. This mechanism results in a poor bit error rate (BER) performance. Sufficiently strong coding spreads the information over multiple subcarriers; this recovers frequency diversity and improves the BER performance.

Single carrier modulation uses a single carrier instead of the hundreds or thousands typically used in OFDM, so the peak-to-average transmitted power ratio for single carrier modulated signals is smaller. This in turn means that a single carrier system requires a smaller linear range to support a given average power, which enables the use of a cheaper power amplifier compared with an OFDM system.

Propagation measurements of radio channels with fixed antennas show that the transfer function varies very slowly with time, so the modulation schemes can be adapted to the prevailing channel transfer function. Each modulation scheme provides a trade-off between spectral efficiency and bit error rate: the spectral efficiency can be maximized by choosing the highest modulation scheme that still gives an acceptable BER. Furthermore, it is assumed that the instantaneous channel transfer function can be estimated at the receiver and communicated back to the transmitter.

The paper is organized as follows. In Section II the fixed and adaptive OFDM transmitters are described. A description of a single carrier system with frequency domain equalization in Section III is followed by simulation results in Section IV.

II. ADAPTIVE OFDM TRANSMISSION

The block diagram of the OFDM transmitter used is shown in Fig. 1. Binary data is fed to a modulator which generates complex symbols at its output. The modulator either uses a fixed signal alphabet (QAM) or adapts the signal alphabets of the individual OFDM sub-carriers. Two different adaptive modulator/demodulator pairs are considered in this paper: in modulator A, the distribution of bits on the individual sub-carriers is adapted to the shape of the transfer function of the radio channel at a frequency-independent power distribution, while modulator B additionally optimizes the power distribution. The third block transforms the symbols into the time domain using the inverse fast Fourier transform (IFFT) at the transmitter, and the next block inserts the guard interval. The output signal is transmitted over the radio channel. At the receiver, the cyclic extension is removed and the signal is transformed back into the frequency domain with an FFT. Prior to demodulation,
the signal is equalized in the frequency domain with the inverse of the transfer function of the radio channel, corresponding to a zero-forcing equalizer; at the receiver, the inverse operation of the transmitter is then carried out. In case of single carrier modulation, the FFT and its inverse are used at the input and output of the frequency domain equalizer in the receiver. In both cases the fast Fourier transform (FFT) and its inverse are utilized: in case of OFDM, the inverse FFT transforms the complex amplitudes of the individual subcarriers at the transmitter into the time domain, whereas in case of single carrier modulation both transforms are located in the receiver.
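The transmitter and receiver chain just described can be sketched in a few lines of numpy: IFFT, cyclic-prefix guard interval, channel, FFT and one-tap zero-forcing equalization. The channel taps used here are an illustrative assumption, and noise is omitted.

    import numpy as np

    rng = np.random.default_rng(1)
    N, G = 256, 50                                  # FFT length and guard interval
    h = np.array([1.0, 0.4 + 0.3j, 0.2])            # illustrative channel impulse response

    X = rng.choice([1+1j, 1-1j, -1+1j, -1-1j], N)   # 4-PSK symbols on the subcarriers
    x = np.fft.ifft(X)                              # to time domain
    tx = np.concatenate([x[-G:], x])                # insert cyclic extension

    rx = np.convolve(tx, h)[:N + G]                 # radio channel (noise-free here)
    y = np.fft.fft(rx[G:G + N])                     # remove guard, back to frequency domain
    H = np.fft.fft(h, N)                            # channel transfer function
    X_hat = y / H                                   # zero-forcing equalization

    print(np.max(np.abs(X_hat - X)))                # ~1e-14: symbols recovered exactly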

Fig. 1 Block diagram of a) an OFDM and b) a single carrier transmission system with frequency domain equalization

The adaptive modulators select from different QAM modulation formats: no modulation, 2-PSK, 4-PSK, 8-QAM, 16-QAM, 32-QAM, 64-QAM, 128-QAM, and 256-QAM. This means that 0 to 8 bits per subcarrier and FFT block can be transmitted. In order to get a minimum overall error probability, the error probabilities for all used sub-carriers should be approximately equal. The algorithm for modulator A maximizes the minimum (with respect to all sub-carriers) SNR margin, i.e. the difference between the actual and the desired SNR for a given error probability; the distribution of bits is carried out in an optimum way so that the overall error probability becomes minimum, and the obtained SNR margin is the maximum possible. Modulator B optimizes both the distribution of bits and the distribution of signal power with respect to frequency simultaneously, calculating the optimum distribution of power and bits; the result is that the same SNR margin is achieved for all sub-carriers. The algorithms for the distribution of bits and power are described in [7], as sketched after this section. The results of the optimization processes of both modulator A and modulator B are shown in Fig. 3: the upper diagram gives the absolute value of the transfer function, and below it the power distribution and the SNR are shown for both modulators. For the specific example presented in Fig. 3, both modulators yield the same distribution of bits.

III. SINGLE CARRIER TRANSMISSION WITH FREQUENCY DOMAIN EQUALIZATION

The lower part of the block diagram in Fig. 1 shows the considered single carrier transmission system. The figure shows that the basic concepts of single carrier modulation with frequency domain equalization and of OFDM transmission are almost similar; the main difference is that the block "inverse FFT" is moved from the transmitter to the receiver [1]. Like in an OFDM system, a block-wise signal transmission has to be carried out, and a periodic extension (guard interval) is required in order to mitigate interblock interference.

A zero-forcing equalizer shows a poor noise performance, since the noise contributions of highly attenuated sub-carriers can become rather large. Therefore, a minimum mean square error (MMSE) equalizer is used for the single carrier system. The transfer function of the equalizer H_e(w, t) depends on the SNR of the respective subcarriers S/N|_r(w, t) at the input of the receiver:

    H_e(w, t) = [ S/N|_r(w, t) / (S/N|_r(w, t) + 1) ] * [ 1 / H(w, t) ],

where H(w, t) denotes the time-variant transfer function of the radio channel. For large SNRs the MMSE equalizer turns into the zero-forcing equalizer, which multiplies by the inverse transfer function.

In contrast to the OFDM receiver, an inverse FFT operation is located between equalization and decision: in case of the single carrier system the decision is carried out in the time domain, whereas in case of the multi-carrier system the decision is carried out in the frequency domain. This inverse FFT operation spreads the noise contributions of all the individual subcarriers onto all the samples in the time domain. The main advantage of single carrier modulation compared with multi-carrier modulation is thus the fact that the energy of the individual symbols is distributed over the whole available frequency range, so that narrowband notches in the transfer function have only a small impact on the error probability. Furthermore, the output signal of a single carrier transmitter shows a small crest factor, whereas an OFDM signal exhibits a Gaussian amplitude distribution. In contrast to adaptive OFDM, a fixed symbol alphabet is used for single carrier modulation in order to realize a constant bit rate transmission. Since the FFT algorithm is also used in case of single carrier modulation, single carrier modulation and OFDM without adaptation exhibit the same complexity.
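The actual bit- and power-loading algorithms are specified in [7]. The greedy sketch below, in the spirit of Hughes-Hartogs loading [5], only conveys the idea of growing the bit and power allocations one bit at a time; it is not the authors' algorithm, and the square-QAM power model is an assumption.

    import numpy as np

    def greedy_loading(subcarrier_snr, total_bits, max_bits=8):
        """Assign total_bits to subcarriers one bit at a time, always taking
        the bit that needs the least extra transmit power.  The power needed
        for b bits on a carrier with gain-to-noise ratio g is modelled as
        (2**b - 1) / g, a square-QAM approximation up to an SNR gap."""
        g = np.asarray(subcarrier_snr, dtype=float)
        bits = np.zeros(g.size, dtype=int)
        power = np.zeros(g.size)
        for _ in range(total_bits):
            extra = np.where(bits < max_bits,
                             (2.0 ** (bits + 1) - 1.0) / g - power, np.inf)
            i = int(np.argmin(extra))
            power[i] += extra[i]
            bits[i] += 1
        return bits, power

    bits, power = greedy_loading([20.0, 8.0, 1.0, 0.1], total_bits=10)
    print(bits, power)   # deeply faded carriers receive few or no bits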

IV. SIMULATION RESULTS

In the present paper the following systems are compared using measured transfer functions of the channels:

i.   single carrier modulation with a minimum mean square error (MMSE) frequency domain equalizer;
ii.  OFDM with fixed modulation of the sub-carriers and frequency-independent power distribution;
iii. OFDM with optimized modulation schemes and frequency-independent power distribution (modulator A);
iv.  OFDM with optimized modulation schemes and optimized power distribution (modulator B).

For all transmission systems a complex baseband simulation is carried out with ideal channel estimation and synchronization. No oversampling was used, since only linear components (except the detectors) are assumed in the transmission systems. The temporal location of the FFT interval with respect to the cyclic extension at the receiver (i.e. the time synchronization of the OFDM blocks) is optimized so that the bit error ratio becomes minimum. The main parameters of the simulations are shown in Table 2. Simulation results for four typical radio channels at a carrier frequency of 1.8 GHz are presented; Table 1 summarizes the parameters of all measurements. In the mobile scenarios (measurements 2 and 4), the user terminal antenna was moved over a distance of 1 m with a low velocity.

Table 1: Parameters of radio channel propagation measurements

    Measurement            1             2             3             4
    Distance of antennas   100 m         100 m         250 m         250 m
    Propagation            LOS           LOS           NLOS          NLOS
    Base station antenna   Omnidir.      Omnidir.      Sectorial     Sectorial
    User terminal antenna  Omni, fixed   Omni, mobile  Omni, fixed   Omni, mobile
    Carrier frequency      1.8 GHz       1.8 GHz       1.8 GHz       1.8 GHz
    Average attenuation    77.5 dB       66.1 dB       105.4 dB      112 dB
    Delay spread           0.41 us       0.43 us       0.34 us       1.74 us

Table 2: Simulation parameters

    Length of FFT interval         256 samples
    Length of guard interval       50 samples
    RF bandwidth                   5 MHz
    Average data rate              16.8 Mbps
    Noise figure of the receiver   6 dB
    Number of transmitted bits     2*10^5

Examples of the simulation results are presented in Fig. 2 and Fig. 3; the figures show the bit error ratio as a function of the average transmitted power. In all the examples shown, only transmission systems with the same average data rate are compared: 16-QAM (bandwidth efficiency 4 bit/symbol) is used for single carrier modulation and fixed OFDM (systems i and ii), and in case of adaptive modulation the average bandwidth efficiency is the same as in case of fixed modulation.

The results show that an enormous improvement in performance (12 to 14 dB) is obtained from OFDM with adaptive modulation in the NLOS case. For the LOS measurements a high gain (7 to 9 dB) compared with fixed OFDM is also obtained, but the gain is smaller than in the NLOS case; this results from the higher coherence bandwidth of the LOS radio channel transfer function. It has been shown in [1, 2] that, if channel coding is included in the transmission system, OFDM with fixed modulation schemes shows approximately the same performance as single carrier modulation with frequency domain equalization. For the un-coded systems considered here, single carrier modulation also obtains a significantly better performance than fixed OFDM: particularly in the NLOS case a significant gain (5 to 6 dB) is obtained, while in case of the LOS channels single carrier modulation yields only a gain of 1 to 2 dB. Adaptive OFDM additionally shows a significant gain compared with single carrier modulation. This can be explained by the fact that in the adaptive system bad channels are not used, or are used only with small signal alphabets, so that a small amount of interblock interference is not critical; for the same reason, adaptive OFDM is less sensitive than fixed OFDM and single carrier modulation to interblock interference due to an insufficiently long guard interval [7].

Only a gain of less than 0.5 dB is achieved using an optimized power spectrum for OFDM instead of a frequency-independent one. Because of this small difference, it is recommended to use a constant power spectrum in order to save computational or signaling effort. Additional simulations show that the gain from adaptive modulation increases when higher-level modulation schemes are used.

Adaptive OFDM also exhibits some disadvantages. The calculation of the distribution of modulation schemes causes a high computational effort. Furthermore, the channel must not vary too fast, because of the required channel estimation; a rapidly varying channel also causes a high amount of signaling information, with the effect that the data rate for the communication decreases. Moreover, an OFDM signal exhibits a Gaussian amplitude distribution with a very high crest factor, so linear power amplifiers with high power consumption have to be used.

V. CONCLUSION

By using adaptive modulation schemes for the individual sub-carriers in an OFDM transmission system, the required signal power can be reduced dramatically compared with fixed modulation. Simulations show that a gain of 5 to 14 dB can be achieved for a given bit error ratio, depending on the radio propagation scenario. Higher gains compared with conventional OFDM are obtained for NLOS channels than for LOS channels; since NLOS radio channels usually exhibit higher attenuation, this property is of particular advantage. The simulation results yield no significant differences between radio channels with fixed and mobile user terminal antennas.

In addition to the modulation schemes (bit distribution), the power distribution of adaptive OFDM can also be optimized. But the simulations reveal that only a small gain of less than 0.5 dB is obtained from the optimum power distribution. It is therefore recommended to refrain from optimizing the power distribution, since either additional computation or additional signaling for the synchronization would be needed.

Also with single carrier modulation a significantly better performance is obtained than with OFDM with fixed modulation schemes, but adaptive OFDM outperforms single carrier modulation by 3 to 5 dB. The better performance of adaptive OFDM compared with single carrier modulation results from the capability of adaptive OFDM to adapt the modulation schemes to subchannels with very different SNRs in an optimum way. In order to improve the performance of single carrier modulation, the latter can be combined with antenna diversity using maximum ratio combining [8].

Fig 2: Simulation results for a line-of-sight (LOS) radio channel with a) a fixed (measurement 1) and b) a mobile (measurement 2) user terminal antenna
Fig 3: Simulation results for a non-line-of-sight (NLOS) radio channel with a) a fixed (measurement 3) and b) a mobile (measurement 4) user terminal antenna

REFERENCES

[1] H. Sari, G. Karam, and I. Jeanclaude: An analysis of orthogonal frequency-division multiplexing for mobile radio applications. In Proceedings of the VTC '94, Stockholm, pp. 1635-1639.

[2] H. Sari, G. Karam, and I. Jeanclaude: Frequency-domain equalization of mobile radio and terrestrial broadcast channels. In Proceedings of the GLOBECOM '94, San Francisco, pp. 1-5.
[3] B. Hirosaki, S. Hasegawa, S. Tanaka, O. Yoshida, K. Inoue, and K. Watanabe: A 19.2 kbps voiceband data modem based on orthogonally multiplexed QAM techniques. In IEEE International Conference on Communications, pp. 661-665.
[4] P. Chow, J. Cioffi, and J. Bingham: A practical discrete multitone transceiver loading algorithm for data transmission over spectrally shaped channels. IEEE Trans. on Communications 43 (1995), pp. 773-775.
[5] D. Hughes-Hartogs: Ensemble modem structure for imperfect transmission media. U.S. Patent 4,679,227 (1987).
[6] Czylwik: Comparison of the channel capacity of wideband radio channels with achievable data rates using adaptive OFDM. In Proceedings of the 5th European Conference on Fixed Radio Systems and Networks ECRR '96, Bologna, pp. 238-243 (1996).
[7] Czylwik: Adaptive OFDM for wideband radio channels. In Proceedings of the GLOBECOM '96, London, pp. 713-718 (1996).
[8] G. Kadel: Diversity and equalization in frequency domain, a robust and flexible receiver technology for broadband mobile communication systems. In Proceedings of the IEEE Vehicular Technology Conference '97, Phoenix (1997).

2009 International Conference on Computer Engineering and Technology

A bivariate C^1 cubic spline space on Wang's refinement

Huan-Wen Liu (Faculty of Mathematics & Computer Science, Guangxi University for Nationalities, Nanning 530006, P.R. China; mengtian29@163.com)
Wei-Ping Lu (Department of Computer Science, Guangxi Economic Management Cadre College, Nanning 530006, P.R. China; luweiping06@163.com)

Abstract

In this paper, the dimension of the space of bivariate C^1 cubic spline functions on a kind of refined triangulation, called Wang's refinement, is determined by using the technique of the minimal determining set, and a set of dual basis functions with local support is constructed.

1. Introduction

Let \Delta be a regular triangulation of a simply connected polygonal domain \Omega in R^2, i.e., \Delta is a set of closed triangles whose union coincides with \Omega and the intersection of any two triangles in \Delta is either empty, a common edge, or a vertex. Given 0 \le r < d, the space of bivariate splines over the triangulation \Delta is defined by

    S^r_d(\Delta) = { s \in C^r(\Omega) : s|_{T^{(l)}} \in P_d, l = 1, 2, ..., |T| },    (1)

where T^{(l)} is a triangle in \Delta and P_d is the linear space of bivariate polynomials of total degree d.

The dimension of the space S^r_d(\Delta) with low degree d versus smoothness r is quite difficult to determine and poorly understood: for an arbitrary triangulation \Delta, the dimension is known only for the cases d \ge 4r+1 by Alfeld and Schumaker [1], d \ge 3r+2 by Hong [6], and d = 4 with r = 1 by Alfeld et al. [2]. The dimension depends not only upon topological invariants but also upon the geometrical shape of the triangulation, as pointed out in several references such as Farin [5], Lai [7] and Liu [9]. This dependence on the geometric structure results in the fact that the dimension of S^1_3(\Delta) over an arbitrary triangulation remains an open problem, though several results have been obtained for some special triangulations; see Diener [3] and Morgan and Scott [10].

In this paper, we consider a kind of refined triangulation \Delta_W which was first proposed by Wang [13], called Wang's refinement, for which the dimension of the bivariate C^2 quintic spline space S^2_5(\Delta_W) was given earlier. By using the technique of the minimal determining set, the dimension of the space S^1_3(\Delta_W) is determined here, and a set of dual basis functions with local support is given.

2. Notation and Preliminaries

2.1 Preliminaries

For any s \in S^r_d(\Delta) and T^{(l)} := <v_1^{(l)}, v_2^{(l)}, v_3^{(l)}>, according to the theory of Bernstein-Bézier polynomials by Farin [4], the restriction of s to T^{(l)} can be expressed as

    s(x, y)|_{T^{(l)}} = \sum_{i+j+k=d} c_{ijk}^{T^{(l)}} (d! / (i! j! k!)) \alpha^i \beta^j \gamma^k,    (2)

where (\alpha, \beta, \gamma) are the barycentric coordinates of (x, y) with respect to the triangle T^{(l)}, defined by

    (x, y) = \alpha v_1^{(l)} + \beta v_2^{(l)} + \gamma v_3^{(l)},  \alpha + \beta + \gamma = 1.    (3)

The coefficients c_{ijk}^{T^{(l)}} are called the B-net coefficients of s(x, y) with respect to T^{(l)}. Each B-net coefficient is associated with a corresponding domain point

    \xi_{ijk}^{T^{(l)}} := (i v_1^{(l)} + j v_2^{(l)} + k v_3^{(l)}) / d.

For convenience, the set of all the domain points \xi_{ijk} will be denoted by D_{d,\Delta}. For each \xi \in D_{d,\Delta}, let \lambda_\xi be the linear functional such that for any spline s \in S^0_d(\Delta), \lambda_\xi s equals the B-net coefficient c_\xi of s at the point \xi. If S is a linear subspace of S^0_d(\Delta), a subset M \subseteq D_{d,\Delta} is said to be a determining set for S provided that s \in S and \lambda_\xi s = 0 for all \xi \in M imply s \equiv 0. M is called a minimal determining set (MDS) for S if there is no other determining set for S with smaller cardinality, where |M| denotes the cardinality of M. It is easy to see that if M is an MDS for S, then dim S = |M|.

Throughout, V, V_I, V_B, E, E_I, E_B and T denote the sets of vertices, interior vertices, boundary vertices, edges, interior edges, boundary edges and triangles of \Delta, respectively.
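Equations (2) and (3) evaluate directly; the sketch below computes barycentric coordinates from the triangle vertices and sums the cubic Bernstein-Bézier terms over a B-net. The all-ones B-net in the example is arbitrary and merely checks the partition-of-unity property.

    import numpy as np
    from math import factorial

    def barycentric(p, v1, v2, v3):
        """Solve p = a*v1 + b*v2 + c*v3 with a + b + c = 1, eq. (3)."""
        M = np.array([[v1[0], v2[0], v3[0]],
                      [v1[1], v2[1], v3[1]],
                      [1.0, 1.0, 1.0]])
        return np.linalg.solve(M, np.array([p[0], p[1], 1.0]))

    def bb_eval(cijk, tri, p, d=3):
        """Eq. (2): s(p) = sum c_ijk * d!/(i!j!k!) * a^i b^j c^k."""
        a, b, c = barycentric(p, *tri)
        return sum(coef * (factorial(d) / (factorial(i) * factorial(j) * factorial(k)))
                   * a**i * b**j * c**k
                   for (i, j, k), coef in cijk.items())

    tri = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
    cijk = {(i, j, 3 - i - j): 1.0 for i in range(4) for j in range(4 - i)}
    print(bb_eval(cijk, tri, (0.25, 0.25)))   # all-ones B-net reproduces the constant 1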

3. We now introduce the C 1 smoothness condition between two adjacent triangles. l = 1. where M denotes the cardinality of M.w(l) . and w(l) in 1 2 3 each triangle T (l) respectively. v2 ]. v2 . γ) is the barycentric coordinates of v4 with respect to the triangle T (1) . 2.v3 ] + βc[v1 . (l) 1 Figure 2: A minimal determining set for S 3 (T W ) marked with . l = 1. w(l) . y) = ci[v1 . we establish a theorem concerning MDS (l) 1 for S 3 (T W ). w(l) . 2. γ1 ) and (α2 . Let M be the set of domain points associ(l) 1 ated with the following 16 B-net coefficients for S 3 (T W ): t 1) ci jk v(l) . v(l) . i! j!k! 2 2 2 (4) (5) ci[v1 . . v(l) (l = 1 2 3 1. In this section. Join v(l) to w(l) and v(l) to w(l) . t = 1. j j+3 We therefore obtain the refined triangulation ΔW = (l) |T | l=1 T W of the original triangulation Δ. 244 (l) 1 Proof : Suppose that the B-net coefficients of S 3 (T W ) listed in 1) . t = 1. .v(l) . (l) 1 is a MDS for S 3 (T W ). β2 . Lemma 2. 3.v2 . respectively.w(l) t t+1 t+2 . must be zero. all the t t t+1 t+2 B-net coefficients associated with the domain points in D1 (v(l) ).v2 .v2 . · · · . j = 1.w(l) . 3. e j ( j = 1. y) ∈ P3 in T (1) . |T |. y) ∈ C 1 (T (1) T (2) ) if and only if the following holds: ci[v1 .v4 ] = c[v1 .v4 ] jk i+ j+k=3 where (α1 . M is called a minimal determining set (MDS) for S if there is no other determining set for S with the cardinality being smaller than M. i + j = 2. v2 . It is easy to see that if M is a MDS for S . 3. · · · . := w(l) . Take three interior points w(l) . 2.v3 ] .v2 . w(l) 1 2 4 5 points are marked with . v4 ] be two adjacent triangles in R2 with a common edge [v1 . i > 1.v2 . Then M 2. where w(l) = 1 v(l) + 4 v(l) + j 7 j 7 j+1 2 (l) 7 v j+2 and v(l) := v(l) . 2. and with p2 (x. 2. 3. and . |V|) be vertices of Δ. β. y) = p2 (x.1. 3. let T W shown in Figure 1 be a (l) refining subdivision of T formed by the following steps: Step 1. β1 . j0 i j0 [v1 ci[v1 . t . t = 1.v4 ] = αci+1. which is a special case of the general C r smoothness condition given by Farin [4]. 2. |T |.w(l) t 3) c111 t+1 t+2 . Theorem 3. then i) By using the C 1 smoothness conditions (Lemma 2. and T (l) = v(l) .1) along the edges v(l) . w(l) := w(l) . 3. . · · · . j j+3 Step 2. assume p1 (x. then dim S = M.v2 . 2.v3 ] . T (2) = [v1 .1. i + j + k = 3. |T |) be triangles of Δ. j = 1. 2. · · · . y) agrees with p1 (x. Then p(x. (l) where v4 := v(l) . 2. j j j j+1 where w(l) := w(l) . Let T (1) = [v1 .w(l) 1 4) c111 2 3 .v2 .w(l) t+1 t+2 v(l) . w(l) . 2. 2. |E|) be edges of Δ.v3 ] + γc[v1 . v3 ]. · · · . i! j!k! 1 1 1 3! i j k αβ γ. j0 i j+10 i j1 j1 (6) where (α. y) with respect to the triangles T (1) and T (2) . t = 1. Suppose p(x. 2. respectively.v3 ] jk i+ j+k=3 (l) Following Wang [13]. γ2 ) are the barycentric coordinates of (x.v2 . (l) 1 3 A minimal determining set for S 3 (T W ) 3! i j k αβ γ. these domain 1 and in Figure 2. w(l) . v(l) .2 Wang’s refined triangulation Let vi (i = 1.w(l) . t = 1.v (l) 1 v1(l) b w(l) 3 w w(l) 1 (l) 2 w3(l) c d a w2(l) w(l) 1 (l) v3 v2(l) v2(l) v3(l) (l) Figure 1: The refined triangulation T W of a single triangle.4) are set to zero. y) ∈ P3 in T (2) . 2) c111 v(l) . i + j = 3.

Let Δ be an arbitrary triangulation and ΔW be the Wang’s refined triangulation of Δ. w(l) . 1 we have dim S 3 (ΔW ) ≥ 3|V| + |E| + 4|T |.1 yield that 1 dim S 3 (ΔW ) = 3|V| + |E| + 4|T |.v(l) . −1). These domain points are marked with and in Figure 3. Which together with Theorem 4. we obtain d = 0. 3. b) For each edge in the original triangulation Δ.w(l) . we have |VI | 2 1 dim S 3 (ΔW ) ≥ 10+3|E I |−7|VI |+ v(l) . 3. Similarly the B-net coefficients c102 t = 2.w(l) .w(l) t c021 t+1 t+2 |VI | = |VI | + 3|T |. w(l) . .w(l) t c012 t+1 t+2 ( j + 2 − jei )+ . indicated w(l) . And by using Theorem 3. Proof: We choose the following domain points to con1 struct a determining set P for S 3 (ΔW ): a) Three B-net coefficients to determine all the B-net coefficients associated with the domain point in D1 (v) around each vertex v ∈ V in the original triangulation Δ. And then we have 1 c120 . 2.w(l) 2 3 1 = −2c201 v(l) . By using C 1 smoothness conditions (Lemma 2.v(l) 2 3 . t = 2. we have 1 2 1 c111 v(l) . with (β3 . c) For each triangle T (l) in the original triangulation Δ. iv) Noticing that v(l) = α3 w(l) + β3 w(l) + γ3 w(l) 1 1 2 3 . |E I | the number of edges in ΔW and ei the number of distinct slopes assumed by these edges.w(l) . 4). the (l) 1 set M is a MDS for S 3 (T W ). 2). w(l) . The related three domain points are marked with in Figure 3. 3.w(l) . we know dim S 3 (T W ) = 16.w(l) 2 3 + 4b. according to [12].1. Similarly the B-net coefficients by are zero.w(l) 2 3 + 2c. Noticing that |E I | = 9|T | + |E I |. w(l) .w(l) .1.w(l) 2 3 1 − c111 v(l) . t = 1. we have 1 dim S 3 (ΔW ) ≥ 10 + 6|T | + 3|E I | − 7|VI |. 0. and |VI | 2 By using Lemma 2. By using C 1 smoothness conditions across edge v(l) . (7) 245 . indicated by are zero. β1 .w(l) . are forced to zero by Lemma 2.w(l) 2 3 1 = −c120 w(l) . Theorem 4. indicated by are annihilated. |E| = 2|E I |−3|VI |+3. β2 .1 and item ii).w(l) 2 3 1 = −c111 w(l) . v(l) . 3.w(l) . −1.1.w(l) . by using C smoothness conditions across edge w(l) .w(l) . It is easy to see that the total number of B-net coefficients in the determining set P is 3|V| + |E| + 4|T |. iii) It is also noted that . 1 dim S 3 (ΔW ) = 3|V| + |E| + 4|T |. Similarly the B-net coefficients c120 t = 2. where |VI | denote the number of interior vertices in ΔW .w(l) 2 3 + 2d. γ3 ) = (−1. Then it follows from a) that all B-net coefficients in D1 (v) around every vertex v ∈ V must be zero. |T | = |E I |−|VI |+1.2. ( j + 2 − jei )+ = i=1 j=1 [(3 − ei )+ + (4 − 2ei )+ ] = 0. w(l) . This completes the proof of this theorem. γ2 ) = (−2.w(l) t+1 t+2 t Thus a = 0. each of them is the interior domain point located in the triangle which does not include any edge of Δ.w(l) . all the remaining B-net coefficients in each triangle are zero. v(l) . choose a domain point associated with that edge. (l) Therefore s ≡ 0. The related domain point is marked with in Figure 3. v(l) . (9) i=1 j=1 Thus c = 0.w(l) t c300 t+1 t+2 v) It follows from above that items .ii) It is noted that w(l) = α1 v(l) + β1 w(l) + γ1 v(l) 3 1 2 3 with (α1 . 3. This mean 1 that P is a determining set and then dim S 3 (ΔW ) ≤ |P|.w(l) 2 3 1 = 2a − c111 v(l) . Using the Euler Theorem 4 Main Results Theorem 4. then 1 dim S 3 (ΔW ) ≤ 3|V| + |E| + 4|T |. indicated by ♦ are annihilated. 2. The proof is completed. w(l) = α2 v(l) + β2 v(l) + γ2 w(l) 2 1 2 3 with (α2 . we have 1 3 1 c111 v(l) . Noticing that T W is a quasi-cross-cut (l) 1 partition. So s must vanish identically inside each subtriangle.w(l) . |V| = |E I |−2|VI |+3. 
Similarly the B-net coefficients t = 2. we have 2 3 1 1 c111 (8) Proof: Using the lower bound formula by Schumaker [11]. we choose four domain points. γ1 ) = (0.v(l) . α3 .1) across edge v(l) .w(l) . 1 We now set all B-net coefficients of s ∈ S 3 (ΔW ) associated with all domain points in P to zero.w(l) t t+1 t+2 Thus b = 0.

w(l) v(l) .320-327. v(l) . Firstly. Comput. Schumaker..): Multivariate Approximation Theory.L. Vol. 3). 2.L. [7] Lai. Theory Appl. “Dimensions of spline spaces over unconstricted triangulations”. pp. [11] Schumaker. . If ci is associated with a t t+1 t+2 t domain point lying in ξ111 t+1 t+2 (t = 1. Vol.. pp. pp. (1975). [6] Hong. Suppose ci is the B-net coefficient of Bi which is set to 1. Cambridge University Press. If ci is associated with a domain 1 point lying in ξ111 2 3 .. v(l) . M. Numer.. In: Schempp. (1992).. then the support set of Bi is the triangle v(l) . Vol.81-88.L. D. Vol. pp. (1990). [10] Morgan. J. Comput. Comput. |P|} obviously forms a dual basis of 1 S 3 (ΔW ).W. (1979). [13] Wang. a [12] Wang. Anal. (1996). We now analyze the support properties of the basis function Bi . [2] Alfeld. L. If ci is associated with a domain point 1 2 3 t lying in ξ111 t+1 t+2 (t = 1. for all ξi ∈ P. G. “The dimension of cubic spline space over stratified triangulation”. The first author is supported by the Natural Science Foundation in Guangxi (0575029) and Hu- 246 . Des. If ci is associated with a domain point lying in D1 (v) around a vertex v ∈ Δ. Aided Geom.3. ξ2 . R. L.w(l) .nan Key Laboratory for Computation and Simulation in Science and Engineering.v(l) . Acta Math. Sinica.. w(l) is the subtriangle t t t+1 ˜ t+1 t+1 ˜ t+1 sharing a common edge v(l) ..379-386.(eds.3. [8] Lai. .83128. Scott. P.... “On the dimension of space of piecewise polynamials in two variables”.. Approx. if v(l) . Denote P = ξ1 .H.16.56-75. . “A C 2 -quintic spline interpolation scheme on triangulation”.543-551. t t+1 t+1 Secondly. w(l) .w(l) Δ.9. pp. “The dimension of piecewise polynomial”. Vol. References [1] Alfeld. Aided Geom. then the support set of Bi is the triangle v(l) . t t t+1 t+1 t+1 Acknowledgements. Approx. w(l) .J. (1991). [5] Farin. v(l) . we consider two situations. M. (1996). (10) The set {Bi . v(l) . L. pp. “Spaces of bivariate spline functions over triangulations”. SIAM J. ξ|P| . Aided Geom.91-106. (1975). then the support set of Bi is the union of those subtriangles located in v(l) . and . Des. J. .w(l) sharing vertices v(l) . “The dimension of spline spaces of smoothness r for d ≥ 4r + 1”. w(l) .189-197.. Vol. (1986).J.24. v(l) with v(l) . where v(l) . then the support set of Bi is all triangles of Δ sharing v.27. w(l) . Exp.18.. “Instability in the dimension of spaces of bivariate piecewise polynomials of degree 2r and smoothness r”. w(l) . “An explicit basis for C 1 quartic bivariate spline”.891-911. Schumaker. Appl. v(l) . if v(l) . Schumaker. K. pp.13. v(l) is an interior edge of Δ. [3] Diener. Numer. L. H.. (1987). v(l) . It is clear that the set P constructed in Theorem 4. “The structural characterization and interpolation for multivariate splines”. Construct. Comput. Piper.1 is a 1 MDS for S 3 (ΔW ). Des. which is commonly called the dual basis corresponding to P. Res. Vol. R. w(l) and t t+1 t+1 v(l) .396-411.w(l) .199-208. G. Zeller. pp. The proof of the theorem is completed. B. [4] Farin. . (2006). v(l) . 2.. pp.. pp. . v(l) 1 2 3 w(l) . SIAM J. v(l) is a boundary edge of t t+1 v(l) .192. P. “Triangular Bernstein-B´ zier e patches”. Vol. then the t t+1 support set of Bi is the the union of v(l) . pp. W.J. Vol. v(l) . [9] Liu. i = 1. we define spline B j ∈ S 3 (ΔW ) to satisfy λξi B j = δi j . For each 1 ξ j ∈ P. D.L. Anal. manuscript. J. Vol.. Birkh¨ yser Verlag.. Spline Functions over Triangulations.7. Math. (1987). T. 3). (2007). 1 Figure 3: A MDS for S 3 (ΔW ) marked with . 
“Scattered data interpolation and approximation by using bivariate C 1 piecewise cubic polynomials”. Math.

At the same time. a robust Hausdorff distance was used in [4]. the applications of Hausdorff distances and genetic algorithms were researched in image shape matching as in [6]. a fast strategy is given as the rough measure of the similarity between the template and images and a novel partial Hausdorff distance is proposed to compute the shape similarity accurately. A) . the HD is defined as follows: H ( A . THE HAUSDORFF DISTANCE On the basis of reviewing the conventional and existing improved Hausdorff distances. A) . The distance between a point and a set is defined as the minimum distance from the point to all points of the set. China E-mail: xugang@ncepu. B) is defined as the maximum of h( A. and optimum matching search. which can adaptively regulate the probabilities of crossover and mutation. A ) = m a x m in b − a b∈ B a∈ A ⋅ . Based on the above-mentioned articles. h ( B . B ). a new genetic algorithm based on fuzzy logic. a directed modified Hausdorff distance (MHD) was introduced by Dubuisson [3]. is used to search the optimum shape matching quickly. B) and h(B. two different improved Hausdorff distances were introduced in [5]. we employ the Euclidean distance in this paper. A. The directed partial Hausdorff distance (PHD) is defined as follows: hl ( B. Wenxian Yang Department of Electrical and Electronic Engineering North China Electric Power University Beijing 102206. a new genetic algorithm based on fuzzy logic.2009 International Conference on Computer Engineering and Technology Fast Shape Matching Using a Hybrid Model Gang Xu.[9]. the conventional Hausdorff distances require high computational complexity and are not suited to the practical applications. genetic algorithm.1109/ICCET. and scaling of objects in image shape matching. Finally. accurate matching. a 2 . According to the partial Hausdorff distance.cn. The genetic algorithm (GA) [7] is such a search and optimization method. Keywords-shape matching.00 © 2009 IEEE DOI 10. B) identifies the largest distance from the point a ∈ A to B. INTRODUCTION The combination of Hausdorff distances and genetic algorithms can effectively detect rotating. The genetic algorithm has found kinds of applications successfully and has shown to be of great promising. ywxzgs81@163. A) = Lth∈B h(b. a p } and B = {b1 . which can adaptively regulate the probabilities of crossover and mutation. The directed distance h( A. b (4) . The experimental results show that the model achieves the shape matching with higher speed and precision compared with the traditional matching algorithms and can be used in real-time image matching and pattern recognition.2009. After edge extracting. B ) = m ax ( h ( A . II. Huttenlocher proposed the partial Hausdorff distance [2]. two new measures of shape similarity between the template and images are proposed to meet the real-time image matching. which is composed of three parts: rough matching. fuzzy control the image quickly among the given test images and the image has the greatest similarity to the template image. A )) . Hausdorff distance. bq } . H ( A. The Hausdorff distance (HD) [1] measures the mismatch of two sets and is more tolerant to perturbations in image matching for it measures proximity rather than exact superposition. b2 . the computational complexity is O ( p ⋅ q ) . Some modified Hausdorff distances have been proposed recently. I. 
a hybrid model for fast shape matching is introduced in this paper to search 978-0-7695-3521-0/09 $25.com Abstract—A hybrid model is proposed to finish image shape matching from coarse to fine.[6]. Conventional HD and Partial HD Given two finite point sets: A = {a1 .53 247 (1) h ( A . B ) = m a x m in a − b a∈ A b∈ B . . (2) h ( B . achieves the optimum shape matching with higher search speed and quality. which has developed to stimulate the mechanism of natural evolution and is powerful in finding the global or near global optimal solution of optimization problems. a fast strategy for rough matching and a new improved partial Hausdorff distance for accurate matching are presented as the measures of the degree of shape similarity between the template and images. The Hausdorff distance and genetic algorithm were applied to object detection in images [8].edu. . 1 ≤ l ≤ q . However. (3) is any norm on the points of A and B.

faster convergence and the global optimization. of which the algorithms based on fuzzy logic have better performances. t dimension. 1) A Novel Partial Hausdorff Distance for Accurate Measurement: Whether the conventional or partial Hausdorff distance. THE HYBRID MODEL FOR SHAPE MATCHING The hybrid model composed of these algorithms can finish shape matching through two parts: the rough matching and the accurate matching.1. which correspond with b j in A. A ) . Ranking h(bj . The flow diagram of rough matching. (5) Where t ( B) = m ⋅ ⎡cos(θ ) − sin(θ ) ⎤ ⋅ B . a novel partial HD is proposed as follows: For the point b j (1 ≤ j ≤ q ) in B. for each point of B. Pc and Pm are outputs. This algorithm has better performance and we use it to search the optimum. 248 . The genetic algorithm is widely used for the optimization problems including objective functions are discontinuous. two novel similarity measures are given on the basis of [5]. thus it is unnecessary to compute each distance. ⎢ sin(θ ) cos(θ ) ⎥ ⎣ ⎦ Therefore.2 and Fig. Equation t = (θ . III. m ) is the parameter of template rotation and scale. A) from small to large. b B.1. is searched to find the feature points. It’s known that the self-organizing genetic algorithm has higher robustness. h(bj . Mt0 add one. 0 ≤ l1 ≤ 1 is true. the key is to find the optimum parameter (θ . For higher matching speed and accuracy. Two Novel Measures of Shape Similarity Based on the Partial Hausdorff Distance For fast and effective image matching. The process of shape matching. The process of shape matching is shown as Fig. a new genetic algorithm based on fuzzy logic is used which can regulate the probabilities of crossover and mutation adaptively. In fact. A) = 1 − Mt 0 / q is called as the fast similarity measure between A and B. For the point b j in B. The optimum model is defined as follows: min h (t ( B ). if there are not points in the neighborhood. Mt0 is unchanged. the flow diagrams of rough matching and accurate matching are shown in Fig. It’s better to adopt the genetic algorithm to search the optimum in this model. In view of the shortcomings of simple genetic algorithm (SGA). m ) to get the best matching. the lth value is h2 ( B. the distances between every point of the template B and the matched image A are computed. this distance has higher accuracy with the same speed. A) which is used for accurate similarity measure. such as premature convergence and slow convergence. image rotation (rotation coefficient θ ) and scale (scale coefficient m) are considered. 3) The Optimum Model for Shape Matching: Quickly retrieving the given test images by rough and accurate matching. is searched to find the feature points and the directed Hausdorff distance h(bj . the differences of the population average fitness and standard deviation between two adjacent generations are inputs. If there are points. Compared with [6]. THE GENETIC ALGORITHM WITH PROBABILITIES OF CROSSOVER AND MUTATION SELF-ADAPTION BASED ON FUZZY CONTROL The convergence speed and solution quality are directly influenced by the probabilities of crossover Pc and mutation Pm.[6].2. Fig.3. in the matched image only one point or several points in its neighborhood have the smaller distance with the point. the small neighborhood M1(m1×m1). 2) A Fast Strategy for Rough Measurement: Initialize the counter Mt0=0. A new genetic algorithm is proposed in [10] and adaptively regulates Pc and Pm by inquiring the table based on fuzzy control. the small neighborhood M2 (m2×m2). if there are not points. 
For A and B are the sets of the edge points of the matched images and the template respectively. A) from b j to A is the smallest distance from b j to these points. Two selfadaption normalized operators are given. A) is a given maximum. A) . IV. Normalized value h1 ( B . high Fig. There have been some Pc and Pm self-adaption algorithms. which correspond with b j in A. this computational cost limits Hausdorff distance’s practical applications. Two images are similar only when Mt 0 / q ≥ l1 × q .Where Lth∈B denotes the Lth ranked value of h(b.

4) Optimum Results Search: The genetic algorithm is adopted. 2) Definition of Objective Functions: The functions for rough and accurate matching are minimized and defined as follows: f1 (θ . 1 2 TEST IMAGES 3 4 Fig. M1 and M2 are much less than p). the initial value of Pc and Pm etc. The template is the same as NO. The Implementation Steps 1) Edge Extracting: Canny operator is employed. NO. Generally M1 is a little bigger than M2.V.[9]. 249 . the largest iterative number NM. 5 6 A. the methods proposed in this paper lessen the computational cost to a great degree.0 in the Platform of Intel (R) Core (TM)2 Duo CPU (2. and comparison of the fast strategy and Hausdorff distance in this paper are showed in table 1 compared with the Hausdorff distance in [6].3. A) = 1− Mt0/ q f1 ∈ (0. NO. 245. Algorithms reference [9] reference [6] PHD this paper the fast strategy (in this paper) HD COMPUTATIONAL EFFICIENCY ANALYSIS +/6q×p 3q 3q×M2 q ×/÷ 6q×p 3q 3q×M2 1 Comparison 2q×p M2+q M2×q+q M1×q 13 14 15 16 17 From table1.1) . B. NO.17 is the image after doubling the size of the template and rotating it 45 degree. TABLE II. m) = h2 (t ( B ). (7) Where t ( B) = m ⋅ ⎡cos(θ ) − sin(θ )⎤ ⋅ B . The flow diagram of accurate matching. m) = h1 (t (B). and 315 degree counter-clock-wise respectively.00GHz). 167.16 is the image after narrowing the size of the template one times and rotating it 90 degree. RESULTS AND ANALYSIS The algorithms are implemented by Matlab7. The test images are showed in table 2.2 image. NO. multiply and divide (×/÷). 7 8 9 (6) 10 11 12 f 2 (θ . Computational Efficiency Analysis The computational times of addition and subtraction (+/). TABLEⅠ. 90. (The worst situation is considered in the partial Hausdorff distance and fast strategy. ⎢ sin(θ ) cos(θ ) ⎥ ⎣ ⎦ 3) Parameters and Ranges: Population size N.11-15 are images after rotating the template 15. A) .1-10 are fishes with kinds of shapes (128×128).

85 0. 2. 0. The lower population average fitness means that the average distance between individuals of each generation and the template is smaller and the search quality is better in the process of searching.22 The changes of crossover probability.05 respectively.6.4. the Hausdorff distance and fast strategy proposed in this paper are compared with which as in [6]. The selection. 0. NM is 100.18 0.9839) (14. the HD given in this paper has higher matching accuracy as well as faster speed and the fast strategy has lower computational complexity. the changes of population average fitness.5.7.1 0 10 20 30 40 50 60 70 80 90 100 Generation Fig. and the initial value of Pc and Pm is 0. and mutation operator are adopted as in [11]. the coding length of θ and m is 10 and 5 respectively with binary coding.Standard deviation Setting parameters: N is 20.24 0.6 and 0.75 0.65 0 10 20 30 40 50 60 70 80 90 100 0. The advantages become more obvious when the images are larger. 0.7 0. Algorithms reference [9] reference [6] PHD this paper the fast strategy (in this paper) HD Running time 27min 39s 2min 58s 3min 1s 2min 50s ( θ ,m) (15. 1 0. shown in table 3. Pc. 0. Generation The changes of mutation probability. A. 1.95 2 1.5 Crossover probability From table3. and Fig.25 SGA GA 0. and Pm of GA given in this paper are shown in Fig.8 0.04 0 10 20 30 40 50 60 70 80 90 100 Generation Fig. the optimized function has lower average fitness and higher standard deviation in GA. Fig. B. Mutation probability SGA GA 0.9 0.5 0 10 20 30 40 50 60 70 80 90 100 Generation Fig.4839. Crossover Probability Pc.08 0.2 0.0323) Optimal results From Fig. The Changes of Population Average Fitness. Noting that the running time is relative because it is influenced by kinds of external factors such as the computer running time. The changes of standard deviation.7 respectively compared with SGA.4. Fig.4 and Fig. and Mutation Probability Pm Take the rough matching between the template and NO.9839) (15.5. Adopting the same genetic algorithm (proposed in this paper). The higher standard deviation shows that the individuals are more dispersive.11 for example.1 0.14 0.11 for example. 250 .7801. Each algorithm has 10 operations to overcome the randomness in GA.2 0.12 0.16 0. COMPARISON BETWEEN SEVERAL HAUSDORFF DISTANCES 0. which are very helpful to wider search.3109. crossover.15 0. Standard Deviation.6.[9].1320.06 SGA GA Average fitness 0. TABLE III. Fig. Comparison Between Several Hausdorff Distances Take the matching between the template and NO.5.5 SGA GA 1 0. The changes of population average fitness. standard deviation.9839) (14.

and Su Jianfeng. A modified Hausdorff distance for object matching. TABLE IV.7801 90. Liu Jianzhuang. The parameters of SGA: ( θ .0081 0.. other images are kinds of affine transformations of the template and are also selected. and R.0002 0 0 0 0. 2000.0323). A new adaptive algorithm for regulating the probabilities of crossover and mutation.0323 0. An Improved Algorithm for 2D Shape Matching Based on Hausdorff Distance. Bristol: Adam Hilger. The parameters show that the matching has low error and high quality. and Gu Jianjun. Comparing images using the Hausdorff distance undertranslation. On the basis of rough matching. Image Pattern Recognition——VC++ Implementation. m) = (17. m) = (14. 1975. 915-919.5726 0.0005 0 0 0 0 0 0 0 Matched Y N Y Y Y Y Y Y Y [7] [8] [9] [10] From table 4.5 1. Control and Decision. G.0202 1. Zhang Wenjing. Israel.5074 246. and stronger self-adaptive ability. 20 (6). D.H.0002 0. 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 n 87 71 90 87 98 92 96 90 97 84 51 93 83 61 40 77 76 RESULTS OF ROUGH MATCHING m 0.6 and Fig.7918 167. 5 (2).K. on 15–18 June.6657 14. . Shen Tingzhi.1437 45. The University of Michigan Press. D. eight images of transformations of the template are gotten by accurate matching as shown in table 5. Acta Electronica Sinica. 425429.From Fig.2317 349.G. Results and Analysis of Rough and Accurate Matching The results of rough and accurate matching are shown in tables 4 and 5 (n denotes the optimal generation. CONCLUSIONS θ (degree) 45. whereas Pc and Pm in SGA remain in the process of searching.P. M. CVPR ’92.4663 12. W.9766 46.3109. 2008. Li Qing.7. 654–656. GA: ( θ . 1. General Topology.0002 0. Dubuisson and A. C. Zhang Zhijia. 23 (1). pp. the average optimum iterative times of SGA and GA are 74 and 51 respectively.9597 0.5513 200.8886 87.H.3109 91.9758 The fast measures 0. Tsinghua University Press. Beijing Jiaotong University Press.0001 0..9839).Conf.5865 253. REFERENCES [1] [2] [3] [4] [5] [6] TABLE V. NO. 1994.6598 91. and Wang Zhiliang.1437 168. 106-109. 1996. and Shi Zelin.J.9839 0.8145 0.0323 1.8046 The hybrid model produced in this paper has features as shown below: • Quickly and effectively finishing the shape matching from coarse-to-fine with the fast strategy and improved HD. pp. Y denotes the image is exactly matched with the template). ICPR.8871 0.. Jain.. 3. Zhang Wei. 1~6. Journal of Beijing Institute of Technology.9839 0.5 1. Yang Shuying.8387 0.6935 0. • The genetic algorithm with the probabilities of crossover and mutation self-adaption based on fuzzy control is adopted to get higher speed.9839 1.0323 0. Jerusalem. Holland J.0002 0. Rucklidge. December 1999.0446 m 0. 11. Park. 0..478 21. NO. 566-568. pp. Adaption in Natural and Artificial Systems.0003 0. pp. 12.0001 0 0 0. Huang Shabai. 2003.0002 0 0. and Klanderman.5719 2. Ann Arbor.5 0. In: Proc. The image 3 is selected which is similar to the template. Kwon. Sim.9113 1. compared with other similar matching methods. pp. Its stability is not that great yet and will be studied next.0909 217. better quality. Soc. 79-83.9150 247. 2000.. 1978. 15. which show that the GA given in this paper has a strong self-adaptive ability.9003 14. Hausdorff Distance Based Object Matching with Genetic Algorithms. Object matching algorithms using robust Hausdorff distance measures. pp.0001 Selected × √ √ × × × × × × × √ √ √ √ √ √ √ match with the template exactly is removed.2. The matching results fit with human vision. Ma Kun.9795 203.A.5103 315. 
2 3 11 12 13 14 15 16 17 n 62 95 58 67 66 54 79 49 86 RESULTS OF ACCURATE MATCHING θ (degree) 1. √ denotes the image is selected.8992 0.9839 1. 14.3167 200.5 0. this model can be used for image recognition and matching in practice. Zang Tiefei.7625 181.0002 0. A Fast Strategy for Image Matching Using Hausdorff Distance.. Gao Xinbo. IEEE Transactions on Image procession. Xu Xiaoming. pp.0088. In conclusion.. 733-737. Yin Yixin. and 17 are selected after rough matching. Huttenlocher. M I. Proceedings of the IEEE Intemational Conference on Robotics.0381 317. Vision Pattern Recogn.0002 0. NO. 16. 2005. Journal of Image and Graphics. The Application of Improved Hausdorff Distance and Genetic Algorithm in Image Matching. VI. Comput. the changes of Pc and Pm in GA are frequent with the changes of average fitness and standard deviation. 24 (4). Xie Weixin.P. 13.5 0.0565 1.K. Intelligent Systems and Signal Processing. The image 3 which does not 251 [11] Csaszar A. O. 1992 IEEE Comput.9718 0. Chen Jianjun.9879 HD 0 0. pp.

2009. haleh. X is the parameter space. INTRODUCTION III. a genetic algorithm approach is proposed for solving multi-objective cell formation problem.2009 International Conference on Computer Engineering and Technology A Multi-objective Genetic Algorithm for optimization of Cellular manufacturing system H. In our research. such as grouping efficiency proposed by Chandrasekharan and Rajagopalan [6]. Then. Genetic Algorithm.ac. the most fundamental objectives for the CFP are Minimization of intercell flows and cell load variation. Cell load variation: 978-0-7695-3521-0/09 $25. This hybrid method presents the large set of non-dominance solutions for decision makers to making best solution.1109/ICCET. haleh24@hotmail. In this paper. Given the size of the cellular manufacturing research output. which is the main aim of a typical multi-objective optimization process. and Y is the objective space. [3] and Dimopoulos [4]. hiranmanesh@ut. Hatefi Department of Industrial Engineering.com. authors used a SPEA-II method as well known and efficient standard evolutionary multi-objective optimization technique. it is surprising that relatively few solution methodologies explicitly address the multi-objective version of the problem. H. Several measures have been used to evaluate the objective function of the CFP. Formally [5]: Where x is called the decision vector. These vectors are known as Pareto optimal. Mathematically. the concept of Pareto optimality is as follows: Assume. DEFINITIONS A general multi-objective optimization problem can be described as a vector function that maps a tuple of m parameters (decision variables) to a tuple of n objectives. kor. We can benefit from lower parts transfer cost due to the minimization of intercell flows and higher within-cell machine utilization due to the minimization of cell load variation. group efficacy by Kumar and Chandrasekharan [7] and bond energy by Miltenburg and Zhang [8]. as indicated in the reviews by Mansouri et al. Multi-Objective. Iran hkor@ut. The mathematical model is given as [9]: . or their combinations.ir. H. y is the objective vector. a is said to dominate b (also written as a>b) if I. Iranmanesh. College of Engineering University of Tehran Tehran. production requirements and available time on machine in a given period. However. without loss of generality.00 © 2009 IEEE DOI 10. we propose to minimize intercell flows and cell load variation in a consideration of the processing time and available time on machine in a given period. Recent surveys [1.com Abstract—Cellular manufacturing (CM) is an important application of group technology (GT) in which families of parts is produced in manufacturing cells or in a group of various machines.ir. SPEA II. M. This technique does not provide the system designer with a set of alternative trade-off solutions. Pareto set.212 252 According to the literature. The objectives are the minimization of both total moves (intercell as well as intracell moves) and the cell load Variation. The set of solutions of a multi-objective optimization problem consists of all decision vectors for which the corresponding objective vectors cannot be improved in any dimension without degradation in another. Keywords— Cellular Manufacturing System. In this paper.ac. s_m_hatefi@yahoo. etc. II.2] indicate that the practical implementation of a cellular manufacturing system involves the optimization of many conflicting objectives. and S. a maximization problem and consider two decision vectors . 
these measures are only suitable for those CFPs whose machine– part flow chart is a 0–1 matrix denoting the manufacturing relationship between machines and parts. The efficiency of multi-objective GA-SPEA II is illustrated on a large-sized test problem taken from the literature. MATHEMATICAL MODEL The Cellular manufacturing (CM) is the application of group technology (GT) in manufacturing systems. without considering other important design factors such as processing time. optimization is normally achieved through the aggregation of the objectives into a single composite objective. Even when multiple objectives are considered.

3 respectively as suggested by Dimopoulos [10] for comparison purposes). m the total number of machines the processing time (hour/piece) of part j on machine i the available time on machine i in a given period of time the production requirement of part j in a given period of time [ ] is an m p machine-part incidence matrix. if machine is in cell l and 0 where otherwise. where According to the above model. (2) it uses a nearest neighbor density estimation technique which guides the search more efficiently. The total number of intracell moves performed by part i in order to complete its processing requirements. M is a c X p matrix of average cell load. (external set) 253 .–‘–ƒŽ ‘˜‡• Where: The cell number in which operation k is performed on part i taking into consideration the sequence of operations. In fact. we solve the problem by GA-SPEA II method and present set of non-dominated solutions. SPEA II has three main differences with respect to its predecessor [14]: (1) it incorporates a fine-grained fitness assignment strategy which takes into account for each individual the number of individuals that dominate it and the number of individuals to which it dominates. SPEA uses an external archive containing nondominated solutions previously found (the so-called external nondominated set). since the external nondominated set participates in the selection process of SPEA. instead of using niches based on distance. the fitness of each member of the current population is computed according to the strengths of all external nondominated solutions that dominate it. thus slowing down the search. At each generation. we give a brief summary of the algorithm here. This strength is similar to the ranking value of MOGA [12]. taking into consideration the sequence of operations. The cell number in which operation (k+1) is performed on part i taking into Consideration the sequence of operations. Thus. The result would be present in section 5.7 and 0. For a more detailed description the interested reader is referred to Zitzler [5]. IV. Because of this. Pareto dominance is used to ensure that the solutions are properly distributed along the Pareto front. SPEA II is also a revised version of SPEA whose pseudo code is shown in Algorithm 2 [14]. The approach adopted for this sake was a clustering technique called average linkage method [13]. its effectiveness relies on the size of the external nondominated set. The number of parts. Although this approach does not require a niche radius. The fractions representing the weights attributed to the intercell and intracell moves respectively ( and are assumed to be 0. the authors decided to adopt a technique that prunes the contents of the external nondominated set so that its size remains below a certain threshold. The fitness assignment process of SPEA considers both closeness to the true Pareto front and even distribution of solutions at the same time. it might reduce the selection pressure. a strength value is computed. since it is proportional to the number of solutions to which a certain individual dominates. This approach was conceived as a way of integrating different MOEAs (Multi Objective Evolutionary Algorithms). the total number of operations to be performed on pan i to complete its processing requirements c The number of cells. where is workload on machine i induced by part j and is equal to X [ is an m X c cell membership matrix. nondominated individuals are copied to the external nondominated set. In SPEA. 
The overall algorithm is as follows: Algorithm 1 (SPEA2 Main Loop) Input: N (population size) (archive size) T (maximum number of generations) Output: A (nondominated set) Step 1: Initialization: Generate an initial population P0 and create the empty archive = Set t = 0. if its size grows too large. SPEA II TECHNIQUE As SPEA (Strength Pareto Evolutionary Algorithm) [5] forms the basis for SPEA2. For each individual in this external set. and (3) it has an enhanced archive truncation method that guarantees the preservation of boundary solutions.

Gupta et al. If the end condition is satisfied. In the other solutions that get by two algorithms. (iii) With a preset mutation probability.The authors purposely use this test problem in order to assess the validity of their approach in a large-sized test problem. values of individuals in Environmental selection: Copy all nondominated individuals in and to If size of exceeds N then reduce  by means of the truncation operator. crossover operation will perform on the selected parents and to form new offsprings (children). Increment generation counter (t = t + 1) and go to Step 2. authors use a large scale of test problem that taken from literature [9] . By comparing these solutions. we perceive solution number 36 in this paper can dominate the solution numbers 23. Termination: If t T or another stopping criterion is satisfied then set A to the set of decision vectors represented by the nondominated individuals in  . by this study authors want to show the efficiency of this algorithm in CMS environment. [9] In addition. Step3 Create a new population by iterating loop highlighted in the following steps until the new population is complete (i) Select two parent chromosomes from a population according to their fitness from step2. V. GENETIC ALGORITHM Step 4 Step 5 at each gene. RESULTS & DISCUSSION The general outline of GA is summarized below [11]: Algorithm 1: Genetic algorithm Step 1 Generate random population with n chromosomes by using symbolic representation scheme (suitable size of solutions for the problem). The set of solutions provided by multi-objective GP-SPEA II provides a good starting point for other decision-maker activities that would lead to an informed decision.…. the set of non-dominated solutions produced by multi-objective GP-SPEA II provides the decision maker with a reasonably complete picture of the potential design trade-offs. In fact. As it can be seen in 254 . Here. So. Stop. offsprings are the exact copy of parents. Variation: Apply recombination and mutation operators to the mating pool and set to the resulting population. compared with Dimopoulos’s solution. Deliver the best solution in the current population. otherwise if size of  is less than N then fill  with dominated individuals in and . (ii) With a preset crossover probability. Also. Those chromosomes with the better fitness will have chosen. Mating selection: Perform binary tournament selection with replacement on  in order to fill the mating pool. which is the aim of a natural multi-objective optimization Process. NUMERICAL EXAMPLE In order to demonstrate efficiency of our model. authors get 36 solutions. Multi-point crossover is used while partially matched crossover is employed for Problem. the solution is near to each other. Chosen genes are swapped to perform mutation process. VIII. 43 that get by Dimopoulos’algorithm. since multi-objective cell-formation problems of this type do not exist in the literature. As already indicated in table I. the authors provided a corresponding set of non-dominated solutions for comparison purposes. If no crossover was performed. mutation will perform on new offspring Multi-objective GP-SPE II is an algorithm for automatically producing sets of non-dominated solutions for multi-objective CFP. this set of non dominated solution also. a Cell Formation Problem test problem [9] solved by GP-SLCA algorithm. stop. (iv) Place new offsprings in the new population. 
CONCLUSION This research aim to implementation of SPEA II algorithm as a state of art method in multi objective problem. authors in different models and objective used this test problem. He gets the set of non dominated solution that consist of 43 solutions. The set of evolved solutions covers the entire trade-off range of the objectives considered. VII. On the other hand. in this paper. This is because it’s using of SPEA II methodology that solved the multi objective problem and produce the set of non dominated solution. Step2 Evaluate the fitness function of each chromosome x in the population by using the proposed objective functions. Go to step 2 VI. In the research of Dimopoulos.Step 2: Step 3: Step 4: Step 5: Step 6: Fitness assignment: Calculate fitness and .

6 94 94 98.303 1.016 1.989 0.2 70.228 1.2 132.5 82. OBJECTIVE FUNCTION VALUE FOR THE SET OF NON-DOMINATED SOLUTION AND COMPARISON WITH DIMOPOULOS [10] RESULT Our GA-SPEA II DIMOPOULOS GPSLCA Cell–load variation 2.173 1.2 84.763 0.378 0.224 Cell–load variation 94.3 0.565 1.822 1.346 1.7 186.185 0.741 0.509 1.665 1.8 76.383 1.3 79.417 0.932 0.31 0.482 0.822 1.5 35.885 25.1 67.3 85.2 60.6 131.738 0.8 40.635 0.8 85 89.8 170.313 1.364 0.315 1.158 1.8 200.739 0.9 44.6 34.4 DIMOPOULOS GPSLCA Total part moves 0.008 1.078 1. Total part moves 1.565 1.9 Our GA-SPEA II Solution No.8 28.115 1.4 76.7 230.105 1.28 1.5 148.8 180 183.093 0.9 53.661 1.8 34.9 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 255 .4 76.5 49.5 122.7 47.315 1.157 1.7 43. TABLE I.8 104.9 174.982 0.6 68 69.997 0.128 0.2 113.6 113.7 51.842 0.2 217.168 0 Cell–load variation 83.472 1.456 1.016 1.8 111.6 38.1 83.2 35.4 42 44.3 122.6 31.28 0. Authors suggest to researcher to study about NSGA II and another efficient algorithm to comparing and presenting the best algorithm for this type of problems.7 100.2 Solution No.634 1.003 0.7 32.989 0.2 162 170.769 1.845 0.1 33.previous section.507 1.2 63.902 0.051 0 238.476 1.558 1.6 69.892 0.7 0.9 123.1 208.5 108.634 1.159 1.3 58.613 1.8 55.621 0.149 Total part moves 25.8 28.312 1.136 1.3 84.373 1.1 38 38.8 156.362 0.4 45. authors compare this algorithm with GPSLCA that presented by Dimopoulos [10].009 0. Cell–load variation 2.2 70.661 1.665 1.4 65 66.472 1.6 45.008 1.9 89.843 0.14 0.48 1.

J. EUROGEN 2001. C. Cellular manufacturing at 46 user plants: implementation experiences and performance improvements.P. 38..P. 28. Int. IEEE Trans. 27. Mansouri. and Rajagopalan. Athens.R. J. 481–507.J. 1996. Prod. Manage.M. Prod. Res. and Thiele.. 3. M. Genetic Algorithms for Multiobjective Optimization: Formulation. A review of the modern approaches to multi-criteria cell design. Prod. Evolutionary Methods for Design... 257–271... SPEA2: Improving the Strength Pareto Evolutionary Algorithm... 38. 1990. Int. J. J. U. 1980. John Wiley. 1201–1218. University of Illinois at UrbanaChampaign. J. Prod.. S. N. Giannakoglou.S. 1993.. Wemmerlov. Thiele. A comparative evaluation of nine wellknown algorithms for solving the cell formation problem in group technology.. 10. E. Res. Dimopoulos. 4119–4133. [2] [3] [10] [11] [12] [4] [5] [13] [6] [14] [7] [8] 256 . 1998. 1999. Int.P.E.. M. Groupability: analysis of the properties of binary data matrices for group technology. 2000. pages 95–100. Reducing the size of the nondominated set: Pruning by clustering. editors. and L. C. Zitzler. Chandrasekharan. second edition.. In S. Forrest. Res. Dimopoulos. and Johnson. Multi-objective optimization of manufacturing cell design.. 4855–4875. Papailou. Int. D. California. and T. Morse. D. Grouping efficacy: a quantitative criterion for goodness of block diagonal forms of binary matrices in group technology. Zitzler. Optimization and Control with Applications to Industrial Problems. U. Fleming. D. S. and Chandrasekharan.S. 2006. W. Greece. J. Res. 1035–1052. 44–72. C. Res. 29–49. E. Discussion and Generalization. M. Miltenburg. Prod. Practical genetic algorithm.. editor. and Johnson. Empirical findings in manufacturing cell design. and Zhang. P.TABLE II. Fogarty. and Mort. 2001. Prod. J. Computers and Operations Research. M. Fonseca and P.. Moattar Husseini. 233–243. R. 2004. 447–482.. San Mateo. 44(22). 
Solution 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 W1 C1 C2 C3 C1 C3 C3 C4 C1 C1 C4 C4 C2 C2 C1 C6 C5 C5 C2 C2 C1 C5 C1 C2 C2 C2 C1 C2 C1 C7 C7 C1 C1 C1 C1 C1 C2 W2 C1 C1 C3 C2 C1 C1 C3 C2 C2 C1 C1 C1 C1 C2 C1 C1 C1 C1 C1 C2 C1 C2 C1 C1 C1 C2 C1 C2 C1 C1 C2 C2 C2 C2 C2 C1 W3 C1 C2 C3 C3 C3 C3 C4 C3 C4 C4 C2 C3 C3 C3 C3 C5 C2 C3 C3 C3 C4 C3 C3 C3 C3 C3 C3 C3 C5 C5 C3 C3 C3 C3 C3 C3 W4 C1 C2 C3 C3 C3 C3 C4 C4 C4 C4 C4 C4 C5 C5 C6 C5 C5 C4 C4 C6 C7 C6 C4 C4 C4 C4 C4 C4 C3 C3 C4 C4 C4 C4 C4 C4 MACHINE-CELL CONFIGURATION FOR THE SOLUTION IN TABLE I W5 C1 C2 C3 C3 C3 C3 C4 C4 C4 C4 C4 C5 C5 C5 C6 C5 C5 C5 C6 C6 C5 C6 C5 C5 C5 C7 C7 C7 C7 C7 C5 C8 C8 C8 C5 C5 W6 C1 C2 C3 C3 C3 C3 C4 C4 C4 C4 C4 C5 C4 C5 C6 C4 C3 C6 C6 C6 C5 C4 C6 C7 C7 C7 C6 C7 C6 C6 C8 C7 C8 C6 C7 C8 W7 C1 C2 C3 C3 C3 C3 C4 C4 C4 C4 C4 C5 C5 C5 C6 C5 C5 C6 C6 C6 C5 C6 C7 C7 C7 C7 C7 C7 C7 C7 C8 C8 C8 C8 C8 C9 W8 C1 C2 C3 C3 C3 C2 C4 C4 C4 C3 C3 C5 C5 C4 C5 C3 C4 C6 C5 C4 C3 C5 C7 C7 C6 C6 C5 C5 C4 C4 C7 C6 C6 C5 C7 C7 W9 C1 C2 C3 C3 C3 C3 C4 C4 C4 C4 C4 C5 C5 C5 C6 C5 C5 C6 C6 C6 C5 C6 C7 C7 C7 C7 C7 C7 C7 C7 C8 C8 C8 C8 C8 C9 W10 C1 C2 C3 C3 C3 C3 C4 C4 C4 C4 C4 C5 C5 C5 C6 C5 C5 C6 C6 C6 C5 C6 C7 C7 C7 C7 C7 C7 C7 C7 C8 C8 C8 C8 C8 C9 W11 C1 C2 C2 C3 C2 C3 C2 C4 C3 C2 C4 C5 C5 C5 C4 C2 C5 C6 C6 C5 C2 C6 C7 C6 C7 C5 C7 C7 C2 C2 C6 C5 C5 C8 C6 C6 W12 C1 C2 C3 C3 C3 C3 C4 C4 C4 C4 C4 C5 C5 C5 C6 C5 C5 C6 C6 C6 C5 C6 C7 C7 C7 C7 C7 C7 C7 C7 C8 C8 C8 C8 C8 C9 W13 C1 C2 C3 C3 C3 C3 C4 C4 C4 C4 C4 C5 C5 C5 C6 C5 C5 C6 C6 C6 C5 C6 C7 C7 C7 C7 C7 C6 C7 C7 C8 C8 C7 C7 C8 C9 W14 C1 C2 C1 C3 C3 C3 C1 C4 C4 C4 C4 C5 C5 C5 C6 C5 C5 C6 C6 C6 C5 C6 C7 C7 C7 C7 C7 C7 C7 C7 C8 C8 C8 C8 C8 C9 W15 C1 C2 C3 C3 C3 C3 C4 C4 C4 C4 C4 C5 C5 C5 C6 C5 C5 C6 C6 C6 C5 C6 C7 C7 C7 C7 C7 C7 C7 C7 C8 C8 C8 C8 C8 C9 REFERENCES [1] Wemmerlov. 35.L. A genetic algorithm-based approach to cell composition and layout design problems. C. Evol. Tsahalis. Int. J. Gupta. Oper. 1989. Y. A.. Comput. Morgan Kaufmann Publishers. 2000.. Haupt. Kumar. Kumar. Res. Proceedings of the Fifth International Conference on Genetic Algorithms. Prod... J. Laumanns.C. and Sundaram. J. 42. Int. 7(1–2):55–66. S. [9] Gupta. 1991. Periaux... Prod. and Newman.A.T. Int. Res. Evolving knowledge for the solution of clustering problems in cellular manufacturing. J. 1997. J. Res. pages 416–423. C. 34. Multiobjective evolutionary algorithms: a comparative case study and the Strength Pareto approach. M..J. L. Haupt. J. Int. In K.

1109/ICCET. Definition 1 Weiser’s slicing problem is stated as follows: For a given program point point and a set of variables V in program P. based on sets. aspect-oriented progran slicing. very little work is involved in its formalization [7]. V> is called slicing criterion. debugging. where Q is called by slice. conditioned program slicing. testing.e. we provide not only a precise semantic basis for program slicing but also a sound mechanism for reasoning and verification about program slicing. dicing. while the latter due to Ottenstein K. Many textbooks on Z are now available. and make a new program (called by slice). transforming. It is used to extract statements and predicates from original program P. Till now. J. originally introduced by Weiser M. the former due to Weiser M. is an effective technique for narrowing the focus of attention to the relevant parts of a program during the debugging process [1].2009. program slicing has been widely used in program analyzing. understanding.. Nowadays. intraprocedural slicing [3] & interprocedural slicing [4]. 330013. Although program slicing has been widely studied in literatures. The specification language Z includes both a means of specifying data types. The teaching of Z has become of increasing interest. relevant slicing. To help alleviate this situation. and a means of specifying constrains using predicate logic. and how to deal with program slicing algorithms.cn Abstract This paper represents a research effort towards the formal mapping between program slicing and Z specifications. and <point. but also for industrial development of high-integrity systems such as safetycritical software.. static slicing & dynamic slicing. where <p. V> is called slicing criterion. object-oriented program slicing. measuring. obtained lots of theoretical results. i. reverse and reengineering. and developed lots of practical applications [2]. definitions of program slicing. Z has been used for a number of digital systems in a variety of ways to improve the specifications of computer-based systems. It is thus attractive to consider it used in formalization of program slicing. which might influence the variables of V at a special program point p. Thus we can analyze the original programs P by the slice. General aspects of program slicing are considered. the sequences of states that arise at point point in the two programs have identical values for all variables in V. hybrid slicing. amorphous slicing. A formal mapping between program slicing and Z specifications The most related concepts and definitions of program slicing are known from [1] and [3]. maintenance. sequence program slicing & current program slicing. nodes and edges of program dependence graphs. this paper represents a research effort towards the semantic formalization for program slicing technique using Z schema calculus [8-10]. researchers have proposed forward slicing & backward slicing. Many researchers have done much in program slicing. 978-0-7695-3521-0/09 $25.6] and so on. 1. etc. specification slicing [5. here referred to as Weiser’s slicing and Ottenstein’s. model checking. optimizing.2009 International Conference on Computer Engineering and Technology A Formal Mapping between Program Slicing and Z Specifications Fangjun Wu School of Information Technology Jiangxi University of Finance and Economics NanChang. development and applications of program slicing technique have been carried out for more than twenty years. and program slicing algorithms.00 © 2009 IEEE DOI 10. 
The Z notation is used recently not only in academic domain. reusing. With this approach.. and software security. Introduction Program slicing. . The reasons for choosing the Z notation are as follows.edu. denotational slicing. Major difficulties in formalizing program slicing lie in how to describe various definitions. union slicing. respectively.122 257 2. chopping. The research. find a projection Q of P by deleting zero or more statements that when P and Q run on the same initial state. China wufangjun@jxufe.

a Boolean type Affect::=Yes | No is introduced to decide whether the variable is affected or not. Henceforth. data dependence graph (DDG) and control flow graph (CFG). architectural dependence graphs. we know that Weiser’s slice is executable. program dependence graphs imply Horwitz’s. Therefore. Thus two free types states of program States and values of variables Value are introduced: [Value. unless explicitly stated. Several formalisms have been used to represent program dependences. States]. For the Ottenstein’s slicing problem. No means not. In following section. Node_Shapes::=circle | ellipse The node will also have a type. UML class diagram dependence graphs and specification dependence graphs (SpDG). a program slice is a subset of the statements and control predicates of the program P that directly or indirectly influence the variables of V at a special program point point. To slice programs. free types Statement and Variable are introduced: [Statement. in which Yes means that the variable is affected. we will outline the features of program dependence graphs. unless explicitly stated. they share some common features. dependence edge also has shape. Line. namely control dependence edges. Among them. From Definition 1 and Definition 2. V> is called slicing criterion. extended program dependence graphs. A node has a shape. Generally. Definition 4 Horwitz’s program dependence graph is a modification of Ottenstein’s. Two typical definitions of program dependence graphs are Ottenstein’s definition[3] and Horwitz’s[4]. if original program OriginalProgram and slice SliceProgram have the same initial state. Node_Shapes will normally be an enumerated type which holds the possible shapes that nodes can have on a given graph type. 258 . loop independent edges and loop carried edges. Definition 3 Ottenstein’s program dependence graph is composed of control dependence graph (CDG). given slicing criterion <point. The Ottenstein’s slicing problem can be described using schema OttensteinProgramSlicing. Free types are introduced to describe nodes and edges respectively: [Node. deforder dependence edges . Node_Types::=entry | assignment | controlpredicate | InitialState | FinialUse | others Many of the characteristics of dependence edges are similar to those of a node. value dependence graphs. Variable]. system dependence graphs (SDG). while Ottenstein’s slice does not necessarily constitute an executable program. and their possible shapes can be described by the enumerated set Arc_Shapes. program dependence graph is the basic.Definition 2 Ottenstein’s slicing problem is stated as follows: For a given point point and a set if variables V in program P. to assure that slices contain all of statements that might affect values for all variables in VariableSet. Name. an original program OriginalProgram must exist. Arc]. Similarly. VariableSet>. Although different definitions of program dependence representations have been given. while others are extended on it by adding some specifical characteristics. Weiser’s slicing problem can be described using schema WeiserProgramSlicing. and it is a directed graph whose nodes are connected by several kinds of edges. For Weiser’s slicing problem. a program slice implies Ottenstein’s slice. such as: program dependence graphs (PDG). programs are composed of statements and variables. Henceforth. then the sequences of states that arise at point point in OriginalProgram and SliceProgram have identical values for all variables in VariableSet. where <point.

type of nodes. we will take them as example to discuss in the following part. Declaration part contains name. nodes entry are formalized by schema EntryNode. represented by state schema GenericArcs. Similarly.Arc_Shapes::=solid | solid_with_a_hash_mark | dashed | medium_boldface | heavy_boldface Similarly. shape. type. and their possible types can be described by the enumerated set Arc_Types. Firstly. whether an arrowhead is included or not. in order to specify whether an arc is directional or not. Ends::=plain | arrow Although there are a lot of different kinds of nodes. For the reason that program dependence graphs are the basic. initial state and final use of variables are formalized by schemas InitialStateNode and FinalUseNode respectively. they have some common features. etc. We define these common features as a generic node. nodes assignment and nodes controlpredicate are formalized by schemas AssignmentNode and ControlPredicateNode respectively. the source and target of dependence edges. too. Arc_Types::=loop_carried_flow_dependence|loop_ind ependence_flow_dependence | def_order_dependence | control_dependence In addition. Similarly. Declaration part contains name. direction of nodes. different kinds of edges also have some common features. we can add arrowhead to it. There are two possible types of end: plain and arrow. 259 . represented by state schema GenericNodes. shape. All kinds of nodes and edges are depicted respectively. We define these common features as a generic arc. dependence edges have type. Secondly.

“A survey of program slicing techniques”. on which our work in this paper is also based. ACM [1] 260 .16. All schemas in this paper are checked using the ZTC type-checker package [11] and Z User Studio[12]. Furthermore. which can be formalized by schema SlicingPDG. pp. Ottenstein K. The other lays its base on graph reachability algorithms[4]. “The program dependence graph and its use in optimization”.5. two methods of computing program slices have been developed.. which iteratively solves dataflow equations derived from inter-statement influence[1]. IEEE Transactions on Software Engineering. [2] Tip F. “Program slicing”. [3] Ferrante J. to perform type-checking). we are going to formalize control dependence edges. Warren J. 121-189. 3. Journal of Programming Languages. 498-509. D. One is the Weiser’s method. vol. 1995. On the basis of formalization of nodes and edges. no. no. Acknowledgements This research has been supported by the Natural Science Foundation of Jiangxi (Project No. Many researchers are doing work along the graph reachability Weiser M. After formalized various kinds of nodes. This formalization could be helpful in correct understanding of different types of slicing and also the correct application of a desired slicing regime in a rigorous way. This operation can still be performed in time linear in the size of the slice by performing a single traversal of the PDG starting from the set of points. 3. 1984. 3... there are a number of existing tools on the market which do manipulate Z specifications (for example. J. pp. we will formalize program dependence graphs by schema PDG.algorithms line. References So far. GJJ08353 and [2007] 434). Hence inference and verification of program slicing become possible. vol. which are formalized by schema ControlDependenceArc. Conclusions This paper represents a research effort towards the semantic formalization for program slicing technique using Z schema calculus. 2007GQS0495) and Science and Technology Foundation of the Education Department of Jiangxi (Project No.

Li Gan. Wu Fangjun. 2006. Yu Chuanjiang. Horwitz S. http://web. Binkley D. 12. September 2004. Harman M. User’s Guide. London: Prentice Hall. pp.43-52. [9] 261 .. [12] Miao Huaikou. “Z User Studio: an integrated support tool for Z specifications”. Shanghai: Shanghai Science and Technology Information Publishing House. pp.ac. December 2001. ZTC: A Type Checker for Z Notation. Version 2. The Z notation: a reference manual: second edition. August 1998. APSEC 2001. 319-349. International Standards Organization... pp. DePaul University. Jiangxi nanchang: Jiangxi university of finance & economics. ACM Transactions on Programming Languages and Systems. type system and semantics. no. Software engineering languageZ. Lili. vol.. 26-60. pp. no. School of Computer Science. vol. Zhu Guanming. KissAkos O L. no. M.03.2004.. 2002. Binkley D. “Interprocedural slicing using dependency graphs”. 1992. Spivey J. Yi Tong. ACM SIGPLAN Notices. 9. 1999. Wu Fangjun. Telecommunication.1990. Gyimothy T. Research on Z formal specifications slicing. pp.martin/zst andards/ [10] Miao Huaikou. 8. ISO/IEC 13568. Proceedings of 4th Workshop on Source Code Analysis and Manipulation (SCAM 2004).[4] [5] [6] [7] [8] Transactions on Programming Languages and Systems. Ming Jijun. 1. USA. “Slicing Z specifications”. 3. and Information Systems. 39-48. Division of Software Engineering.ox. vol. Reps T. Liu Ling. 1998.uk/oucl/work/andrew. Z formal specification notation-syntax.comlab. 437-444.1987. “Formalizing executable dynamic and forward slicing”. 39. Danicic S. [11] Jia X.

kernel method method GS-DS GDA as an abbreviation.. A numerous methods have been proposed to deal with the SSS problem of LDA [6-8] for resolving the optimal discriminant vectors in the range space of total scatter matrix and the null space of within-class scatter matrix.. The conclusions are drawn in Section 5.. the traditional approach is the use of the singular value decomposition (SVD) [2]. The GDA methods also suffers from this problem since the dimensionality of samples in feature space is much greater than the number of the samples. difference space. in this paper. The theoretical justifications of the proposed batch GDA and the classincremental GDA are presented in this paper. However. However. we have T (2) K = X Φ X Φ = R T Q T QR = R T R where K is a n × n kernel matrix which can be computed using kernel function as ( K ) ij = k ( xi .2009. We call the proposed 978-0-7695-3521-0/09 $25. In implementation. Neural Computation 18. GRAM-SCHMIDT ORTHOGONALIZATION IN FEATURE SPACE I. China.. and it is just done by computing the inner product with a kernel of two vectors in F function k ( x.4].1109/ICCET. orthogonalization.. the matrix X Φ can be expressed as X Φ = QR (1) where Q is column orthogonal matrix and R is an upper triangular matrix Because the columns of matrix Q are orthonormal.28 262 In kernel method. then upper triangular matrix R is positive definite and we obtain (3) X Φ R −1 = Q From above analysis. y ) = Φ ( x ) T Φ ( y ) [3.00 © 2009 IEEE DOI 10. by directly performing the Gram-Schmidt (GS) orthogonalization procedure [11] in the difference space (DS) [12] using the kernel trick. 210044 yunhuihe@163. II. the Gram-Schmidt orthogonalization in feature space is introduced in Section 2. Because there is no need to compute the mean of classes and the mean of total samples in the proposed method as needed in the traditional class-incremental GDA. From (2). the samples are transformed into an implicit higher dimensional feature space F though a nonlinear mapping Φ(x) . the LDA often encounters the small sample size (SSS) problem [5] when the dimensionality of samples is greater than the number of samples. we propose an efficient method for resolving the optimal discriminant vectors of Generalized Discriminant Analysis (GDA) and point out the drawback of high computational complexity in the traditional classincremental GDA [W. Φ( x n )] in feature space. x j ) K is symmetric positive semi-definite matrix. Zheng.. we propose an efficient method to resolving the optimal discriminant vectors of the batch GDA and class-incremental GDA respectively. For solving GDA. Let the sample matrix be which X = [ x1 . To overcome this drawback.. Zheng [10] proposed a numerically stable algorithm for batch GDA and class-incremental GDA in the case of the small sample size problem by applying only QR decomposition in feature space. In the next section. Keywords. Recently. If samples are linearly independent. Because there is no need to compute the mean of classes and the mean of total samples in GS-DS GDA as needed in the class-incremental GDA. “Class-Incremental Generalized Discriminant Analysis”. the algorithms in [10] are not optimal in terms of computational complexity because the mean of classes and the mean of samples must be computed before the QR decomposition is applied. xn ] X Φ = [Φ( x1 ). 
the implicit features vector in F dose not need to be computed explicitly.2009 International Conference on Computer Engineering and Technology Modified Class-Incremental Generalized Discriminant Analysis Yunhui He Department of Communications Engineering Nanjing University of Information Science and Technology Nanjing. we can see that performing the Gram-Schmidt orthogonalization in feature space is actually ..com Abstract—In this paper.4]. 979–1006 (2006)].class-incremental generalized discriminant analysis. INTRODUCTION The classical linear discriminant analysis (LDA) [1] was nonlinearly extended to the Generalized discriminant analysis (GDA) [2] by mapping the samples from input space to a high-dimensional feature space via the kernel trick [3. the SVD-based GDA algorithms suffer from the numerical instability problem due to the numerical perturbation [9]. the computational complexity is reduced greatly. the triangular matrix R can be obtained by performing Cholesky decomposition of K [11]. becomes By using QR decomposition. The batch GS-DS GDA algorithm and class-increment algorithms are proposed and the theoretical justification is proofed in Section 3 and 4 respectively. the computational complexity is thus reduced greatly. However.

Φ n Φ 2 Φ C Φ C The n −1difference vectors are computed as 1 Φ(d l1 ) = Φ( xl1+1 ) − Φ( x1 ) for class 1..Φ(dn2 ). these n −1 difference vectors span the difference space which equals the space of all eigenvectors corresponding to the nonzero eigenvalues of covariance matrix matrix S bΦ .. in this section. which thus leads to a great increase of computational complexity.Φ(dn1 −1 ). All samples are independent. it can be seen that the leftmost n − C vectors in (12) span the R( S w ) .Φ(bn2 −1 ). these ni − 1 difference vectors span the difference space of class i which equals the space of all eigenvectors corresponding to the nonzero eigenvalues (16) From (14) and (16)....9) in [10]) used for solving GDA are not optimal..Φ(b 2 1 2 n2 −1 )..equivalent to performing a Cholesky decomposition of the kernel matrix K ...Φ(d12 ). which leads to R(S ) = N (S ) = (∩ N (S )) = ∪ R(S ) i =1 i =1 Φ w Φ ⊥ w C (15) Φ i ⊥ Φ i (8) C Φ ( d n C ) − Φ ( d 1C ). III.... (x1 ). ki = 1...Φ(b ).C (10) { In the traditional class-incremental GDA algorithm.... (xnC )} ... Φ(d12 )..... by Since from (12) all n −1 vectors in (15) span the 263 ..........Φ(b12 ).. The between class scatter ki = 1. { (12) S bΦ = ∑ ni ( μ iΦ − μ Φ )( μ iΦ − μ Φ ) T Φ i i Sw = ∑∑(Φ( xm ) − μiΦ )(Φ( xm ) − μiΦ )T = ∑ SiΦ i i Φ Φ StΦ = ∑∑(Φ(xm ) − μΦ )(Φ(xm ) − μΦ )T = Sb + Sw i=1 m=1 i i SiΦ = ∑∑(Φ( xm ) − μiΦ )(Φ( xm ) − μiΦ )T m=1 m=1 Ni ni (4) C By comparing the definitions in (11) and (9)...... (14) C nC −1 Φ(b )..... because the mean of classes and the mean of total samples must be computed....2. Let the training set has C classes...... The ranks of Because Φ Φ S b . where ni samples and the number of total samples is n = ∑i=1 ni ...Φ(b C 1 )} from Eq. the column vectors (Eq. Φ R( StΦ ) and leftmost n − C vectors span R( S w ) ........ C . n1 − 1 i 1 i i Φ(dkii ) = Φ( xki ) − Φ( x1 ) = (Φ( xki ) − Φ(x1 )) + i 1 (Φ(x1 ) − Φ(x1 )) = Φ(bkii −1) + Φ(d1i ) where is covariance matrix of class i . THE BATCH GS-DS GDA ALGORITHM of covariance matrix S i of class further expressed as i ..