
Advances in Intelligent Systems and Computing 268

Yong Soo Kim
Young J. Ryoo
Moon-soo Chang
Young-Chul Bae
Editors

Advanced Intelligent Systems
Advances in Intelligent Systems and Computing

Volume 268

Series editor
Janusz Kacprzyk, Polish Academy of Sciences, Warsaw, Poland
e-mail: kacprzyk@ibspan.waw.pl

For further volumes:
http://www.springer.com/series/11156
About this Series

The series “Advances in Intelligent Systems and Computing” contains publications on theory, applications, and design methods of Intelligent Systems and Intelligent Computing. Virtually all disciplines such as engineering, natural sciences, computer and information science, ICT, economics, business, e-commerce, environment, healthcare, life science are covered. The list of topics spans all the areas of modern intelligent systems and computing.
The publications within “Advances in Intelligent Systems and Computing” are primarily textbooks and proceedings of important conferences, symposia and congresses. They cover significant recent developments in the field, both of a foundational and applicable character. An important characteristic feature of the series is the short publication time and world-wide distribution. This permits a rapid and broad dissemination of research results.

Advisory Board

Chairman

Nikhil R. Pal, Indian Statistical Institute, Kolkata, India
e-mail: nikhil@isical.ac.in

Members

Emilio S. Corchado, University of Salamanca, Salamanca, Spain
e-mail: escorchado@usal.es

Hani Hagras, University of Essex, Colchester, UK
e-mail: hani@essex.ac.uk

László T. Kóczy, Széchenyi István University, Győr, Hungary
e-mail: koczy@sze.hu

Vladik Kreinovich, University of Texas at El Paso, El Paso, USA
e-mail: vladik@utep.edu

Chin-Teng Lin, National Chiao Tung University, Hsinchu, Taiwan
e-mail: ctlin@mail.nctu.edu.tw

Jie Lu, University of Technology, Sydney, Australia
e-mail: Jie.Lu@uts.edu.au

Patricia Melin, Tijuana Institute of Technology, Tijuana, Mexico
e-mail: epmelin@hafsamx.org

Nadia Nedjah, State University of Rio de Janeiro, Rio de Janeiro, Brazil
e-mail: nadia@eng.uerj.br

Ngoc Thanh Nguyen, Wroclaw University of Technology, Wroclaw, Poland
e-mail: Ngoc-Thanh.Nguyen@pwr.edu.pl

Jun Wang, The Chinese University of Hong Kong, Shatin, Hong Kong
e-mail: jwang@mae.cuhk.edu.hk
Editors
Yong Soo Kim, Daejeon University, Daejeon, Korea
Young J. Ryoo, Mokpo National University, Jeonnam, Korea
Moon-soo Chang, Seokyeong University, Seoul, Korea
Young-Chul Bae, Chonnam National University, Gwangju, Korea

ISSN 2194-5357
ISSN 2194-5365 (electronic)
ISBN 978-3-319-05499-5
ISBN 978-3-319-05500-8 (eBook)
DOI 10.1007/978-3-319-05500-8
Springer Cham Heidelberg New York Dordrecht London

Library of Congress Control Number: 2014933114

© Springer International Publishing Switzerland 2014

This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. Exempted from this legal reservation are brief excerpts in connection with reviews or scholarly analysis or material supplied specifically for the purpose of being entered and executed on a computer system, for exclusive use by the purchaser of the work. Duplication of this publication or parts thereof is permitted only under the provisions of the Copyright Law of the Publisher’s location, in its current version, and permission for use must always be obtained from Springer. Permissions for use may be obtained through RightsLink at the Copyright Clearance Center. Violations are liable to prosecution under the respective Copyright Law.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication
does not imply, even in the absence of a specific statement, that such names are exempt from the relevant
protective laws and regulations and therefore free for general use.
While the advice and information in this book are believed to be true and accurate at the date of publication,
neither the authors nor the editors nor the publisher can accept any legal responsibility for any errors or
omissions that may be made. The publisher makes no warranty, express or implied, with respect to the material
contained herein.

Printed on acid-free paper

Springer is part of Springer Science+Business Media (www.springer.com)


Preface

Intelligent systems originated in the attempt to imitate the human brain: people wish to have machines perform intelligent work. Many techniques of intelligent systems are based on artificial intelligence. Driven by changing and novel requirements, advanced intelligent systems cover a wide spectrum: big data processing, intelligent control, advanced robotics, artificial intelligence, and machine learning. This book focuses on coordinating intelligent systems with highly integrated and foundationally functional components. It consists of 19 contributions that feature social network-based recommender systems, application of fuzzy enforcement, energy visualization, ultrasonic muscular thickness measurement, regional analysis and predictive modeling, analysis of 3D polygon data, a blood pressure estimation system, a fuzzy human model, a fuzzy ultrasonic imaging method, ultrasonic mobile smart technology, pseudo-normal image synthesis, a subspace classifier, mobile object tracking, a standing-up motion guidance system, a recognition structure, multi-CAM and multi-viewer, a robust Gaussian kernel, multi human movement trajectory extraction, and fashion coordination. This edition comprises original, peer-reviewed contributions covering everything from initial designs to final prototypes and authorization.

To help readers understand the articles, we provide a short introduction to each as follows:
1. “Qualitative Assessment of Social Network-Based Recommender Systems based on Essential Properties”: This paper evaluates and assesses several social network-based recommender systems in terms of robustness, trust, serendipity, diversity, privacy preservation, and scalability. Based on the observations and analysis, it proposes ways to improve the performance of each recommender system.
2. “Application of Fuzzy Enforcement to Complementarity Constraints in Nonlinear Optimization”: This paper presents the application of fuzzy enforcement to complementarity constraints in nonlinear interior point method (NIPM) based optimization. Fuzzy enforcement can provide enough room for optimality while adequately satisfying the complementarity constraints.
3. “iPhone as multi-CAM and multi-viewer”: This paper describes capturing and watching real-time images on iPhones or iPads over WiFi networks. The image resolution and frame rate depend on the WiFi traffic. These systems are widely applicable to home monitoring and baby care.
4. “Robust Gaussian Kernel Based Approach for Feature Selection”: This article combines the similarity-margin concept with Gaussian-kernel fuzzy rough sets and optimizes the symbolic data selection problem. The main advantage of this approach is its robustness.
5. “Multi Human Movement Trajectory Extraction by Thermal Sensor”: This paper proposes a system that extracts multiple human movement trajectories (HMTs), with room layout estimation, using a thermal sensor. The sensor is attached to the ceiling and acquires a 16 × 16 grid of spatial temperatures (a thermal distribution). The distributions are analyzed to extract HMTs.
6. “An Energy Visualization by Camera Monitoring”: This paper proposes a camera-based energy visualization system. The system applies edge detection and connected-component labeling to extract the numeral regions in the counters of a gas meter. Gas consumption is estimated from the shape characteristics of the numerals.
7. “Ultrasonic Muscular Thickness Measurement in Temperature Variation”: This paper proposes a muscular thickness measurement method that exploits the temperature dependency of acoustic velocity. The authors employ a 1.0 MHz ultrasonic probe and acquire two kinds of ultrasonic echoes from the same body position under temperature variation.
8. “Regional Analysis and Predictive Modeling for Asthmatic Attacks in Himeji City”: This article predicts the number of asthmatic attacks occurring in Himeji city through a time-series analysis of data from areas divided into coastal and inland regions.
9. “Analysis of 3D Polygon Data for Comfortable Grip Form Design”: This paper describes a method that uses 3D image processing techniques to extract features, i.e., the positions and directions of fingers and the relationships among them, from 3D polygon data. The results show that gripping trends can be categorized into 5 classes and that the obtained features are effective inputs for mathematical models.
10. “Blood Pressure Estimation System by Wearable Electrocardiograph”: This paper proposes a blood pressure estimation system based on the electrocardiogram (ECG). The ECG is measured without constraint by a wearable sensor that transmits the acquired data to a personal computer via wireless communication.
11. “A Fuzzy Human Model for Blood Pressure Estimation”: The paper describes a blood pressure prediction model. The model predicts the subject’s blood pressure based on trends in blood pressure, body weight, and step count.
12. “A Fuzzy Ultrasonic Imaging Method for Healthy Seminiferous Tubules”: The authors construct cross-section images by multiplying fuzzy degrees derived from the amplitude and frequency of line echoes. The resulting healthy and unhealthy seminiferous tubule images (HSI and USI) indicate the distribution of healthy and unhealthy seminiferous tubules.
13. “Ultrasonic Mobile Smart Technology for Healthcare”: This study designs a mobile medical system for reviewing data prior to patient access. The improved communication can also make the process easier for patients, clinicians, and caregivers.
14. “Pseudo-normal Image Synthesis from Chest Radiograph Database for Lung Nodule Detection”: The pseudo-normal image is synthesized from a database containing other patients’ chest radiographs that have already been diagnosed as normal by medical specialists. The lung nodules are then emphasized by subtracting the synthesized normal image from the target image.
15. “Low-pass Filter’s Effects on Image Analysis using Subspace Classifier”: This paper examines the effect of applying a low-pass filter on the performance of image analysis using the subspace classifier. Analysis accuracy depends on whether or not the images are filtered.
16. “A New Mobile Object Tracking Approach in Video Surveillance: Indoor Environment”: This paper deals with mobile object tracking indoors. The new approach reduces tracking to simple extension and contraction operations on the object window.
17. “Development of a Standing-up Motion Guidance System using an Inertial Sensor”: This article presents a standing-up motion guidance system for elderly and disabled people. The standing-up motion consists of a flexion phase, in which the center of gravity (COG) moves forward, and an extension phase, in which the COG rises upward. The proposed system is evaluated as highly effective in supporting forward COG movement.
18. “A Structure of Recognition for Natural and Artificial Scenes; Effect of Horticultural Therapy Focusing on Figure-Ground Organization”: This paper presents horticultural therapy as an intervention for elderly people with depressive symptoms. Therapy within the perception-action cycle can enhance motivation when subjects interact with natural objects. The experimental results demonstrate a significant difference in eye movements between the natural and artificial object cases.
19. “A Study on Fashion Coordinates Based on Clothes Impressions”: This paper proposes a fashion coordinate generation system that reflects impressions expressed by an image word. The system is constructed in three steps: analysis of the impressions of clothes, analysis of the impressions of combinations of outerwear and a shirt, and generation of initial coordinate candidates.

We hope that readers find useful information in these articles and are inspired to create innovative and novel concepts or theories. Thank you.

Editors
Yong Soo Kim
Young J. Ryoo
Moon-soo Chang
Young-Chul Bae
Contents

Qualitative Assessment of Social Network-Based Recommender Systems
Based on Essential Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
Regin Cabacas, Yufeng Wang, In-Ho Ra
Application of Fuzzy Enforcement to Complementarity Constraints in
Nonlinear Optimization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
Hwachang Song
iPhone as Multi-CAM and Multi-viewer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
Chen-Chia Chuang, Shun-Feng Su, Meng-Cheng Yang, Jin-Tsong Jeng,
Chih-Ching Hsiao, C.W. Tao
Robust Gaussian Kernel Based Approach for Feature Selection . . . . . . . . . . . 25
Chih-Ching Hsiao, Chen-Chia Chuang, Shun-Feng Su
Multi Human Movement Trajectory Extraction by Thermal Sensor . . . . . . . 35
Masato Kuki, Hiroshi Nakajima, Naoki Tsuchiya, Junichi Tanaka,
Yutaka Hata
An Energy Visualization by Camera Monitoring . . . . . . . . . . . . . . . . . . . . . . . 51
Tetsuya Fujisawa, Tadahito Egawa, Kazuhiko Taniguchi, Syoji Kobashi,
Yutaka Hata
Ultrasonic Muscular Thickness Measurement in Temperature Variation . . . 65
Hideki Hata, Seturo Imawaki, Kei Kuramoto, Syoji Kobashi, Yutaka Hata
Regional Analysis and Predictive Modeling for Asthmatic Attacks in
Himeji City . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
Sho Kikuchi, Yusho Kaku, Kei Kuramoto, Syoji Kobashi, Yutaka Hata
Analysis of 3D Polygon Data for Comfortable Grip Form Design . . . . . . . . . . 85
Yuji Sasano, Hiroharu Kawanaka, Kazuyoshi Takahashi, Koji Yamamoto,
Haruhiko Takase, Shinji Tsuruoka

Blood Pressure Estimation System by Wearable Electrocardiograph . . . . . . . 95
Tatsuhiro Fujimoto, Hiroshi Nakajima, Naoki Tsuchiya, Yutaka Hata
A Fuzzy Human Model for Blood Pressure Estimation . . . . . . . . . . . . . . . . . . 109
Takahiro Takeda, Hiroshi Nakajima, Naoki Tsuchiya, Yutaka Hata
A Fuzzy Ultrasonic Imaging Method for Healthy Seminiferous Tubules . . . . 125
Koki Tsukuda, Tomomoto Ishikawa, Seturo Imawaki, Yutaka Hata
Ultrasonic Mobile Smart Technology for Healthcare . . . . . . . . . . . . . . . . . . . . 137
Naomi Yagi, Tomomoto Ishikawa, Setsurou Imawaki, Yutaka Hata
Pseudo-normal Image Synthesis from Chest Radiograph Database
for Lung Nodule Detection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
Yuriko Tsunoda, Masayuki Moribe, Hideaki Orii, Hideaki Kawano,
Hiroshi Maeda
Low-pass Filter’s Effects on Image Analysis Using Subspace Classifier . . . . . 157
Nobuo Matsuda, Fumiaki Tajima, Naoki Miyatake, Hideaki Sato
A New Outdoor Object Tracking Approach in Video Surveillance . . . . . . . . . 167
SoonWhan Kim, Jin-Shig Kang
Development of a Standing-Up Motion Guidance System Using an
Inertial Sensor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179
Chikamune Wada, Yijiang Tang, Tadahiro Arima
A Structure of Recognition for Natural and Artificial Scenes: Effect
of Horticultural Therapy Focusing on Figure-Ground Organization . . . . . . . 189
Guangyi Ai, Kenta Shoji, Hiroaki Wagatsuma, Midori Yasukawa
A Study on Fashion Coordinates Based on Clothes Impressions . . . . . . . . . . . 197
Moe Yamamoto, Takehisa Onisawa

Author Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 213


Qualitative Assessment of Social Network-Based
Recommender Systems Based on Essential Properties

Regin Cabacas, Yufeng Wang, and In-Ho Ra*

Information and Communications Engineering Department,


Kunsan National University
Miryong-dong, Jeollabukdo 573-701 South Korea
{rcabacas,ihra}@kunsan.ac.kr, ywang2@mail.usf.edu

Abstract. Prediction accuracy is the most common metric used to evaluate the
performance of traditional recommender systems. However, it may not be
sufficient for Social Network-based Recommender Systems (SNRSs), which use
social connections to create predictions and suggestions; other important
properties should be taken into account when implementing and evaluating
them. This paper evaluates and assesses several social network-based
recommender systems in terms of robustness, trust, serendipity, diversity,
privacy preservation, and scalability. Based on our observations and analysis,
we propose suggestions that can improve the performance of each recommender
system.

Keywords: social network, recommender systems.

1 Introduction

The rapid increase in the amount of information on the Web makes it difficult for Internet users to obtain desired information. This problem becomes even worse when users do not utilize appropriate search tools. In the past decade, various Recommender Systems (RSs) have been proposed to solve this problem. RSs in highly rated sites such as Amazon, Netflix, TripAdvisor, Yahoo, and YouTube have played an important role in their success [1]. The key idea is to provide users with items that might be of interest based on previous preferences, transactions, and profiles, so that sound decisions can be made.
The integration of social networks opens a new field of research in recommender systems. With social networking Web sites such as Facebook, LinkedIn, and Twitter, it is highly desirable to have an application that integrates information from these sources to provide customized recommendations for an individual, a group, or a community. The idea of incorporating knowledge from social networks (e.g., social influence and social interaction) originates from the fact that users are often guided by the opinions and recommendations of their friends.

*
Corresponding author.

Y.S. Kim et al. (eds.), Advanced Intelligent Systems,
Advances in Intelligent Systems and Computing 268,
DOI: 10.1007/978-3-319-05500-8_1, © Springer International Publishing Switzerland 2014

The first step in selecting an appropriate RS algorithm is to decide which properties of the application to focus upon [2]. Knowing which properties are valuable and understanding their effects will help designers choose the right recommendation approach and algorithm, and careful judgment is needed when prioritizing one property over another. In this paper, we provide an overview of a set of properties that are relevant for SNRSs and evaluate several SNRSs in terms of these essential properties. The remainder of this paper is organized as follows: Section 2 discusses recommender systems; Section 3 presents several existing SNRS methods; Section 4 describes essential properties of SNRSs; Section 5 shows the evaluation and suggestions to improve the performance of SNRSs; and Section 6 concludes the paper.

2 Recommender Systems
RSs comprise software tools and techniques that provide suggestions for items that might be of interest to a given user now or in the near future [1]. The following subsections describe RS functions, data sources, and commonly used recommendation approaches.

2.1 Functions

RSs are multi-faceted applications commonly employed in e-commerce sites (e.g., Amazon), entertainment sites, including movie or DVD recommenders (e.g., Netflix, IMDb), services such as travel itineraries (e.g., TripAdvisor), and personalization sites (e.g., Yahoo, YouTube). The authors of [1] argue that to understand a recommender system’s function, designers should view the application from two perspectives, namely the service provider’s and the user’s. Here are some of the reasons why service providers employ RSs:

• increase the number of items sold
• sell more diverse items
• increase user satisfaction
• increase user fidelity and
• better understand what the user wants.
On the other hand, from a user’s perspective, RSs are used to:

• find some good items
• find all good items
• just browse
• find a credible recommender
• improve the profile
• express self
• help others and
• influence others.

2.2 Data Sources

Data sources are the lifeblood of RSs and, most of the time, the basis for creating recommendations. Some recommendation techniques, however, are knowledge-poor: they use only simple, basic data such as user ratings and item evaluations. The following are common data sources used in RSs.
a. Items. Items are the objects that are recommended. Items may be characterized by
their complexity and their value or utility. The value of an item may be positive if the
item is useful for the user or negative if the item is not appropriate and the user made
a wrong decision when selecting it.
b. Users. Users are the people seeking desirable items. Most RSs retain a user profile that contains demographic data (e.g., gender, age) and user preferences.
c. Transactions. Transactions are recorded interactions between a user and the RS: log-like data that store important information generated during the human-computer interaction, which is useful for the recommendation generation algorithm that the system is using.

2.3 Approaches

Three main approaches are commonly used in RSs: Content-Based (CB), Collaborative Filtering (CF), and Hybrid.
a. Content-Based (CB): This approach recommends items that are similar to those a user has preferred in the past. It continually collects users’ information and preferences and builds a user profile. The similarity of items is calculated based on the features associated with the user’s profile.
b. Collaborative Filtering (CF): This approach recommends to the active user items that other, similar users liked in the past. CF systems can be classified into two sub-categories: memory-based CF and model-based CF. Memory-based approaches make predictions by taking into account the entire collection of items previously rated by a user. Model-based approaches instead learn a model from a collection of ratings and use this model for making predictions.
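The memory-based variant can be sketched in a few lines of Python. This is an illustrative sketch only, not code from any of the surveyed systems: ratings are held in plain dictionaries, and cosine similarity is used as one common (assumed) choice of user-user similarity.

```python
import math

def cosine_sim(ra, rb):
    """Cosine similarity between two users' rating dictionaries,
    with the dot product taken over the items both users have rated."""
    common = set(ra) & set(rb)
    if not common:
        return 0.0
    num = sum(ra[i] * rb[i] for i in common)
    den = (math.sqrt(sum(v * v for v in ra.values()))
           * math.sqrt(sum(v * v for v in rb.values())))
    return num / den if den else 0.0

def predict(ratings, user, item):
    """Memory-based CF: similarity-weighted average of the ratings
    that all other users gave to `item`."""
    num = den = 0.0
    for other, r in ratings.items():
        if other == user or item not in r:
            continue
        s = cosine_sim(ratings[user], r)
        num += s * r[item]
        den += abs(s)
    return num / den if den else None

ratings = {"ann": {"m1": 5, "m2": 3},
           "bob": {"m1": 4, "m2": 3, "m3": 4},
           "eve": {"m1": 1, "m3": 2}}
print(predict(ratings, "ann", "m3"))  # a value between eve's 2 and bob's 4
```

A model-based system would instead fit, say, a latent-factor model to `ratings` offline and predict from the learned factors rather than scanning the whole rating collection at query time.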
c. Hybrid: This approach is based on combinations of the approaches above. It combines CB and CF, and in most cases uses the advantages of CF to fix the disadvantages of CB, or vice versa. The authors of [3] enumerate different ways to combine collaborative and content-based methods into a hybrid recommender system, classified as follows:

• separately implementing CB and CF methods and combining their predictions
• incorporating CB characteristics into a CF approach
• incorporating some CF characteristics into a CB approach and
• constructing a general model that incorporates both CB and CF characteristics.

3 Social Network-Based Recommender System

An SNRS makes use of knowledge obtained from social networks to improve the recommendation process. This knowledge includes explicit and implicit social interaction, social influence, trust, and social behavioral patterns. Several papers have verified the contribution of this knowledge to recommendation success [4, 5, 6]. The following SNRSs are evaluated in this paper:
a. SOMAR (Social Mobile Activity Recommender)
SOMAR is a social network-based recommender system that recommends activities based on a user’s social network, mobile phone data, and sensor data in a ubiquitous environment. It helps the user filter and analyze activities by utilizing social affinities and user interests.
b. FilmTrust
FilmTrust is a web recommender that combines social networks with movie ratings. In this system users can read about movies, rate them, and write reviews. It uses trust ratings within the social network as the basis for similarity calculations.
c. GLOSS (Group Learning Sharing Own Contribution Search)
GLOSS is a search system that incorporates a social network and provides recommendations based on trust weights. It identifies similar users, revises the trust weights, and finds potentially trusted users through a feedback mechanism.
d. MyPopCorn
MyPopCorn is a Facebook movie RS application that uses an unweighted social graph. It requires explicit feedback on movies from the user. Its recommendations are generated by two implementations: one is a traditional user-based RS, where the neighborhood is calculated over all users in the database, and the other uses the social graph, where the neighborhood is based on the set of the active user’s friends.
e. SNS (Social Network-based Serendipity Recommender System)
SNS is a system that predicts and recommends items that the active user has not yet seen but that are of great interest and hard to search for. It makes use of social network interactions and item access records to provide recommendations.

4 Evaluation

Prediction accuracy is the most common performance measure for RSs. Most RS designers place great emphasis on the accuracy of predictions, whether the approach and algorithm are tested offline or with real user interaction. A basic assumption in a recommender system is that a system providing more accurate predictions will be preferred by the user [2]. However, the works in [7, 8] argue that accuracy is not the only factor to consider when evaluating the overall performance of an RS.

Table 1. Comparison of SNRS researches based on data sources, function, and approach

Recommender Systems | Data Sources | RS Function | Approach Used
SOMAR (Zanda et al.) | Facebook, mobile data, sensor data | Social network activity | Hybrid
FilmTrust (Golbeck and Hendler) | Own data set (user profiles, preferences, ratings, feedback) | Movie | Collaborative Filtering
GLOSS (Zhang et al.) | User data (user profile, social network information) | Professional academic search | Collaborative Filtering
MyPopCorn (de Mello Neto and Nowe) | Facebook, user profile, GroupLens data set | Movie | Collaborative Filtering
SNS (Chiu et al.) | Social network interactions, relationships, access records, MovieLens data set | Movie | Collaborative Filtering

In this section we compare and assess the SNRSs based on a set of properties. We focus on robustness, trust, serendipity, diversity, privacy preservation, and scalability, and on how they affect recommendation success. Several studies point out the following factors as contributing to the overall performance of RSs [2].

a. Robustness
This refers to the stability of the RS in the presence of fake information and attacks. Such attacks commonly take the form of profile injections designed to promote the value of a certain item over others.
Robustness measures the performance of the system before and after an attack to determine how the attack affects the system as a whole. The authors of [9] conducted an experiment to determine the effect of attack models on CF algorithms. Average prediction shift is one of the common measures used to evaluate the robustness of an RS [10]. It refers to the change in an item’s predicted rating before and after an attack, averaged over all predictions or over the predictions targeted by the attack. Equations (1) and (2) give the average prediction shift for an item i over all users and the average prediction shift over all items, respectively. Table 2 shows the assessed robustness of each SNRS. High robustness indicates that the system will still provide accurate predictions even when threats to the data are present.

Δp̄_i = ( Σ_{u∈U} Δp_{u,i} ) / |U|    (1)

Δp̄ = ( Σ_{i∈I} Δp̄_i ) / |I|    (2)

where Δp_{u,i} denotes the shift in the predicted rating of item i for user u before and after the attack, U is the set of users, and I is the set of attacked items.
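Computed directly from two sets of predictions, the measure looks like this. This is a minimal sketch: the prediction dictionaries and their key layout are hypothetical conveniences, not structures defined in [10].

```python
def item_shift(before, after, item):
    """Equation (1): average shift in `item`'s predicted rating over all
    users for whom a prediction exists both before and after the attack.
    `before` / `after` map (user, item) pairs to predicted ratings."""
    users = [u for (u, i) in before if i == item and (u, i) in after]
    return sum(after[(u, item)] - before[(u, item)] for u in users) / len(users)

def overall_shift(before, after, items):
    """Equation (2): average of the per-item shifts over the attacked items."""
    return sum(item_shift(before, after, i) for i in items) / len(items)

before = {("a", "x"): 3.0, ("b", "x"): 3.0, ("a", "y"): 2.0}
after  = {("a", "x"): 4.0, ("b", "x"): 5.0, ("a", "y"): 2.5}
print(overall_shift(before, after, ["x", "y"]))  # 1.0
```

A large positive value after a push attack on item x would indicate low robustness, since the injected profiles succeeded in inflating x’s predicted ratings.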

Table 2. Assessed robustness of SNRSs

Recommender Systems | Robustness
SOMAR (Zanda et al.) | High
FilmTrust (Golbeck and Hendler) | Medium
GLOSS (Zhang et al.) | Medium
MyPopCorn (de Mello Neto and Nowe) | Medium
SNS (Chiu et al.) | Low

b. Trust
Trust is the measure of willingness to believe in a user based on his competence and
behavior within a specific context at a given time. Humans usually retain a mental
map of the level of trust towards a friend’s advice.
FilmTrust, GLOSS, and MyPopCorn use trust ratings within a social network as the basis for similarity calculations, relying on the notion that there is a correlation between trust and user similarity. The work in [5] verifies this correlation in an empirical study of a real online community. SOMAR and SNS do not include trust among users in the social network; instead, social network interaction is given the primary degree of importance in the recommendation process.
In detail, the social networking component of FilmTrust requires users to provide a trust rating for each person added as a friend. With the collected trust values, it uses TidalTrust, a trust network inference algorithm, as the basis for generating predictive ratings personalized for each user. In the authors’ experiment, the accuracy of the recommended ratings outperforms both the simple average rating and the ratings produced by a common RS algorithm. Table 3 shows the use of trust in the SNRSs.

Table 3. Usage of trust in SNRSs

Recommender Systems | Use of Trust
SOMAR (Zanda et al.) | No
FilmTrust (Golbeck and Hendler) | Yes
GLOSS (Zhang et al.) | Yes
MyPopCorn (de Mello Neto and Nowe) | Yes
SNS (Chiu et al.) | No
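The final weighting step of such a trust-based prediction can be sketched as follows. This is a simplified illustration only: TidalTrust’s path-based trust inference itself is omitted (the trust values are assumed to be already inferred), and all names and data structures here are hypothetical.

```python
def trust_weighted_rating(trust, ratings, user, movie):
    """Predict a rating as the trust-weighted average of neighbors'
    ratings. `trust[user]` maps each neighbor to an (assumed
    pre-inferred) trust value in [0, 1]; `ratings[neighbor]` maps
    movies to that neighbor's rating."""
    num = den = 0.0
    for neighbor, t in trust.get(user, {}).items():
        if movie in ratings.get(neighbor, {}):
            num += t * ratings[neighbor][movie]
            den += t
    return num / den if den else None

trust = {"ann": {"bob": 0.9, "eve": 0.1}}
ratings = {"bob": {"m": 4}, "eve": {"m": 1}}
print(trust_weighted_rating(trust, ratings, "ann", "m"))  # ≈ 3.7
```

The highly trusted neighbor dominates the prediction, which is exactly how trust substitutes for rating-overlap similarity in systems like FilmTrust.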

c. Serendipity
Serendipity is a measure of how surprising the successful recommendations are [2]. It is the amount of information in a recommendation that is new to the user, essentially the “not obvious” items in the list. Several works on serendipitous recommendation show that serendipitous items appear in recommendation lists of different items in different categories rather than in lists of similar items. Furthermore, the authors of [2] proposed a recommendation method to increase the diversity of recommendation lists. In this paper, we aim to identify each SNRS’s need for serendipitous recommendation. Table 4 shows the need for serendipity of each SNRS.

Table 4. Assessed need for serendipitous recommendation

Recommender Systems | Serendipity of Recommendations
SOMAR (Zanda et al.) | Medium
FilmTrust (Golbeck and Hendler) | Medium
GLOSS (Zhang et al.) | Low
MyPopCorn (de Mello Neto and Nowe) | Medium
SNS (Chiu et al.) | Low
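One simple way to operationalize the “not obvious” share of a list is to count recommended items that a primitive baseline recommender would not have produced and that the user nonetheless found useful. This is an illustrative metric of our own phrasing, not the measure used by any of the surveyed systems.

```python
def serendipity(recommended, obvious, useful):
    """Fraction of the recommended list that is both unexpected
    (absent from the baseline's `obvious` list) and useful to the user."""
    if not recommended:
        return 0.0
    unexpected = [x for x in recommended if x not in set(obvious)]
    return sum(1 for x in unexpected if x in set(useful)) / len(recommended)

print(serendipity(["a", "b", "c", "d"], obvious=["a", "b"], useful=["c"]))  # 0.25
```

Here “a” and “b” are discounted as obvious, “d” is surprising but not useful, and only “c” counts, so one item in four is serendipitous.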

d. Diversity
Diversity is a quality of result lists that helps cope with ambiguity. It generally applies to a set of items and relates to how different the items are from each other. Studies in [7, 11] introduced diversification methods to balance and diversify personalized recommendation lists so that they reflect the user’s complete spectrum of interests. Improving the diversity of a fixed-size recommendation list involves a trade-off: some prediction accuracy is sacrificed [11]. Table 5 summarizes the need of each SNRS for diversification.

Table 5. Assessed need of diversity

Recommender Systems                     Diversity
SOMAR (Zanda et al.)                    Highly Desirable
FilmTrust (Golbeck and Hendler)         Highly Desirable
GLOSS (Zhang et al.)                    Not necessarily needed
MyPopCorn (de Mello Neto and Nowe)      Highly Desirable
SNS (Chiu et al.)                       Not necessarily needed
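The intra-list notion of diversity described above, the average pairwise dissimilarity of the items in a list, can be sketched as follows. The genre-based dissimilarity and the item names are hypothetical examples, not data from the evaluated systems.

```python
from itertools import combinations

def intra_list_diversity(items, dissimilarity):
    """Average pairwise dissimilarity over all item pairs in the list."""
    pairs = list(combinations(items, 2))
    if not pairs:
        return 0.0
    return sum(dissimilarity(a, b) for a, b in pairs) / len(pairs)

# Hypothetical genre map; items of different genres count as dissimilar.
genre = {"m1": "action", "m2": "action", "m3": "drama"}
d = lambda a, b: 0.0 if genre[a] == genre[b] else 1.0
print(intra_list_diversity(["m1", "m2", "m3"], d))  # 2 of 3 pairs differ
```

Maximizing this quantity for a fixed-size list is exactly what trades off against prediction accuracy.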

e. Privacy Preservation
Privacy is a critical issue for users. They are reluctant to provide personal details
for fear of misuse, and RS administrators are concerned about the legal issues
associated with protecting user privacy.
The use of OSN data (e.g., Facebook) is subject to the preservation of users'
privacy. The authors of SOMAR handle privacy preservation by running the
recommendation process in situ, on the active user's mobile phone, leaving no traces
of valuable data passed to a third-party entity. We consider two factors that
contribute to privacy preservation in SNRS, namely transparency and anonymity.

8 R. Cabacas, Y. Wang, and I.-H. Ra

• Transparency

Users ask themselves about the reason behind a recommendation. They are more inclined
to accept and evaluate recommendations favorably once they understand how an item has
been suggested to them.
The authors in [5] have evaluated the role of transparency in the accuracy of
recommender systems. They suggest that RSs are mostly not used in high-risk
decision-making because of a lack of transparency. The applicability of transparency
depends on the domain or function of the RS. Transparency is most likely beneficial
for RSs that recommend, for example, travel itineraries, investments and real estate.
However, most of the SNRSs mentioned above still operate as black boxes, leaving the
recommendation process to the system and never letting the user know how it arrives
at its suggestions. These SNRSs mainly belong to low-risk domains, where transparency
would likely not be present. However, MyPopCorn users are aware of how recommended
items are derived, either from the user's preferences or from similarities with other
users or friends.

• Anonymity

Being anonymous in an OSN is hard to imagine, especially in an SNRS where user
profiles are stored and used in prediction. Anonymity is tied to transparency: if a
user can see how a recommendation has been calculated, there can be instances of
exploitation of users' data. Data connected with a recommendation can be used
maliciously. For example, in the case of MyPopCorn, friends' similarities can be seen
once a recommendation is given to the user. User data obtained from the inference
process in recommender systems could be used by perpetrators to commit crimes such as
harassment, burglary and identity theft. Table 6 summarizes the privacy risk based on
the evaluation of transparency and anonymity in SNRS.

Table 6. Privacy risk rated as Low, Medium or High effect on user, with risk factors

Recommender Systems                     Privacy Risk
SOMAR (Zanda et al.)                    Low (in situ processing)
FilmTrust (Golbeck and Hendler)         Medium (social interactions)
GLOSS (Zhang et al.)                    Medium (social interactions)
MyPopCorn (de Mello Neto and Nowe)      Medium (user profile and social interaction)
SNS (Chiu et al.)                       Low (experimental data set used)

f. Scalability
With exponentially increasing numbers of users and items, SNRSs, like any RSs, will
likely suffer serious scalability problems. Social network users normally have
hundreds to thousands of friends, making the computation of similarity complicated.
In [12], the authors state that users are connected with many other users but do not
all interact in the same way. Users only interact with a small group of friends,
normally the closest in the social network structure. SOMAR, GLOSS and MyPopCorn
represent this connection in a social graph.

Table 7. Use of social graph

Recommender Systems                     Use of Social Graph
SOMAR (Zanda et al.)                    Yes
FilmTrust (Golbeck and Hendler)         No
GLOSS (Zhang et al.)                    Yes
MyPopCorn (de Mello Neto and Nowe)      Yes
SNS (Chiu et al.)                       No
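Narrowing similarity computation to a user's close neighborhood in the social graph, as SOMAR, GLOSS and MyPopCorn effectively do, can be sketched as a bounded breadth-first search. The toy graph and hop limit below are illustrative assumptions, not data from these systems.

```python
from collections import deque

def friends_within(graph, user, max_hops):
    """Collect users within max_hops of `user` via breadth-first search.

    With max_hops=1 only immediate friends are returned, the narrowed
    candidate set used for similarity computation; larger values pull in
    distant friends at a higher computational cost.
    """
    seen = {user}
    frontier = deque([(user, 0)])
    found = set()
    while frontier:
        node, depth = frontier.popleft()
        if depth == max_hops:
            continue
        for friend in graph.get(node, []):
            if friend not in seen:
                seen.add(friend)
                found.add(friend)
                frontier.append((friend, depth + 1))
    return found

# Hypothetical toy graph: "u" is the active user.
g = {"u": ["a", "b"], "a": ["c"], "b": [], "c": []}
print(sorted(friends_within(g, "u", 1)))  # ['a', 'b'] - immediate friends only
```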

In GLOSS, the authors state that similarities between friends are on average higher
than those between non-connected users. This suggests that focusing on the social
graph as the representative relationship, instead of on the user's whole social
network structure, is applicable. Narrowing the data set, as done in SOMAR and
MyPopCorn, addresses the problem of scalability. Furthermore, the authors in [5]
suggest that focusing on immediate friends yields results similar to also using
distant friends in the social network, as shown in Table 8.

Table 8. Mean Absolute Error (MAE) of prediction with and without distant friend
inference

Type                                    MAE
With Distant Friend Inference           0.716
Without Distant Friend Inference        0.682
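The MAE values in Table 8 are computed as the average absolute difference between predicted and actual ratings over the test set; a minimal sketch (the rating values are illustrative):

```python
def mean_absolute_error(predicted, actual):
    """MAE = average of |prediction - true rating| over all test pairs."""
    assert len(predicted) == len(actual) and predicted
    return sum(abs(p - a) for p, a in zip(predicted, actual)) / len(predicted)

print(mean_absolute_error([3.5, 4.0, 2.0], [4.0, 4.0, 3.0]))  # 0.5
```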

5 Suggestions and Future Work

The popularity of CF algorithms in SNRS suggests that this class of algorithms is
well suited for most SNRSs. The authors in [9] conducted an experiment and
demonstrated the relative robustness and stability of model-based CF algorithms over
the memory-based approach. This could be the basis for choosing model-based
algorithms for an SNRS whose focus is robustness. Robustness should be highly
regarded, since profile injection, especially for OSN data, is prevalent. Measures
should be taken to ensure the credibility of the data being used and that no biased
items are added that could affect the output of the predictions.
In our evaluation, the use of weighted trust in the social network structure is
essential to improving the recommendation process. However, the SNRSs listed in
Section 2 only consider trust calculated from the frequency of social network
interactions and explicit ratings, with mostly no consideration of other social
contexts of trust (e.g., similarity of preferences, proximity of location, community
impact factors or reputation). We consider these factors important for improving the
use of trust in SNRS. FilmTrust could also integrate the real-time interactions that
can be acquired from any OSN (e.g., Facebook). FilmTrust and SNS could also make use
of social graphs to enhance their recommendation processes.
Serendipity and diversity of recommendations vary with the domain of the RS. In
specific domains such as academic research, as in GLOSS, serendipity should be of low
importance relative to item similarity or prediction accuracy. However, for
entertainment domains such as movie recommenders, serendipitous recommendations are
highly desirable to present to users. Diversity can also be a trade-off with
prediction accuracy. The dissimilarity of items could be beneficial in some SNRSs but
not in others. RSs that try to promote items (products, movies, etc.) or events are
most likely in need of diversification.
Privacy preservation is only lightly tackled in the evaluated set of SNRSs.
Transparency in the calculation of predictions creates both an advantage and a
disadvantage. It is advantageous when transparency creates a desirable impact on a
user's belief in and acceptance of the recommendations. However, it becomes a
disadvantage if it is exploited maliciously. Alongside transparency, the user's
information (e.g., the user's profile) is implicitly included in the recommendation.
Transparency and anonymity should be addressed by explicitly stating the privacy
options that a user has when using the RS. A user should know which data are shared
and which are not.
Social neighborhood connections allow assumptions to be derived about a new user's
taste, and a network of friends can yield specific interests relevant to the active
user. Direct or close friends may already provide sufficient social graph data for
prediction, which shortens the computation time of making recommendations and thus
addresses scalability problems.

6 Conclusion

In this paper, we have provided an overview of RS in terms of function, data sources
and techniques. We evaluated existing SNRS methods in several domains based on
performance measures, i.e., robustness, trust, serendipity, diversity, privacy
preservation and scalability. These properties are as essential as prediction
accuracy. However, most of these properties are not incorporated in the evaluated
SNRSs. A trade-off in prioritizing one property over another may occur, and
incorporating all of the properties, while potentially effective, would make an SNRS
more complex. Furthermore, we seek to apply the proposed suggestions to improve
future SNRSs.

Acknowledgements. This research was supported by Basic Science Research Program
through the National Research Foundation of Korea (NRF) funded by the Ministry of
Education, Science and Technology (2013054460).

References
1. Ricci, F., Rokach, L., Shapira, B.: Introduction to Recommender Systems Handbook. In: Recommender Systems Handbook, pp. 1–29 (2011)
2. Shani, G., Gunawardana, A.: Evaluating Recommendation Systems. In: Recommender Systems Handbook, pp. 257–297. Springer US (2011)
3. Kanna, F., Mavridis, N., Atif, Y.: Social Networks and Recommender Systems: A World of Current and Future Synergies. In: Computational Social Networks, pp. 445–465. Springer London (2012)
4. Golbeck, J.: FilmTrust: Movie Recommendations from Semantic Web-based Social Networks. In: IEEE CCNC Proceedings (2006)
5. He, J., Chu, W.: A Social Network-Based Recommender System (SNRS). Doctoral Dissertation, University of California (2010)
6. Bellogín, A., Cantador, I., Castells, P., Diez, F.: Exploiting Social Networks in Recommendation: A Multi-Domain Comparison. In: Dutch-Belgian Information Retrieval Workshop, The Netherlands (2013)
7. Onuma, K., Tong, H., Faloutsos, C.: TANGENT: A Novel, 'Surprise Me', Recommendation Algorithm. In: Proceedings of the 15th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, France (2009)
8. Fouss, F., Saerens, M.: Evaluating Performance of Recommender Systems: An Experimental Comparison. In: International Conference on Web Intelligence and Intelligent Agent Technology, vol. 1, pp. 735–738 (2008)
9. Mobasher, B., Burke, R., Bhaumik, R., Williams, C.: Toward Trustworthy Recommender Systems: An Analysis of Attack Models and Algorithm Robustness. ACM Transactions on Internet Technology 7 (2007)
10. Hurley, N.: Tutorial on Robustness of Recommender Systems. In: ACM RecSys (2011)
11. Bradley, K., Smyth, B.: Improving Recommendation Diversity. In: 12th Irish Conference on Artificial Intelligence and Cognitive Science, pp. 85–94 (2001)
12. Zanda, A., Menasalvas, E., Eibe, S.: A Social Network Activity Recommender System for Ubiquitous Devices. In: Proceedings of the 11th International Conference on Intelligent Systems Design and Applications, pp. 494–497 (2011)
Application of Fuzzy Enforcement to Complementarity
Constraints in Nonlinear Optimization

Hwachang Song

Dept. of Electrical and Information Engr., Seoul Nat’l University of Science & Technology
232 Gongreung-ro, Nowon-gu, Seoul 139-743, Korea
hcsong@seoultech.ac.kr

Abstract. This paper presents the application of fuzzy enforcement to
complementarity constraints in nonlinear interior point method (NIPM) based
optimization. The fuzzy enforcement can provide enough room for the optimality,
adequately satisfying complementarity constraints.

Keywords: complementarity constraints, fuzzy enforcement, nonlinear interior point
methods, nonlinear optimization.

1 Introduction
This paper presents the application of fuzzy enforcement to complementarity
constraints (CC), expressed as inequality constraints, for nonlinear interior point
method based optimization. Fuzzy enforcement was originally proposed in [1], but for
general equality and inequality constraints in a successive linear programming
algorithm. Introducing fuzzy enforcement adequately handles the concept of violating
complementarity conditions "not too much", providing enough room for solutions to
move toward optimality.

2 Fuzzy Enforcement of Complementarity Constraints


The formulation of nonlinear programming problems with the CC of interest in this
paper can be briefly expressed as follows:

    min  f(x)
    s.t. g(x) = 0
         hmin ≤ h(x) ≤ hmax                                        (2)
         (ci(xi) − αi)(xi − βi) = 0
         ci(xi) − αi ≥ 0,  xi − βi ≥ 0,  i = 1, ..., kc

where x is the vector including control and dependent variables. In (2), f(·) is the
objective function; g(·) and h(·) are the function vectors for equality and
inequality constraints, respectively; hmin and hmax denote the lower and upper limits
of h(·); xi stands for the i-th variable of x involved in the CC; ci(xi) − αi and
xi − βi are the functions for the complementarity conditions, and they are
non-negative; kc is the number of CC in the problem. Based on the condition that the
factors of the CC are non-negative, an equivalent inequality constraint can be formed
as follows:

    (ci(xi) − αi)(xi − βi) ≤ 0                                     (3)

Y.S. Kim et al. (eds.), Advanced Intelligent Systems, 13
Advances in Intelligent Systems and Computing 268,
DOI: 10.1007/978-3-319-05500-8_2, © Springer International Publishing Switzerland 2014

When applying the nonlinear interior point method (NIPM), which has been applied to
several engineering problems [2-6], to the optimization problem with this inequality
form of CC, it is possible to use correction equations of the same dimension as for
the nonlinear optimization problem without the equality form of CC. However, NIPM
uses log barrier functions that force the solution to remain within the feasible
region throughout the whole procedure, and hence the solution cannot move freely from
the initial vector x to find better solutions with respect to optimality and
feasibility. Thus, a facilitating technique that provides enough room for solutions
to move might be needed, considering the condition of the CC.
Let the i-th CC function in (3) be denoted cci(xi). The fuzzy set theory [7] can be
applied to the CC because, during the solution process of the nonlinear interior
point method (NIPM), "not too much" violation of the CC might be acceptable. With the
fuzzy relation, the inequality form of CC can be written as:

    cci(xi) ≲ 0                                                    (4)

Each fuzzy relation in fuzzy set theory is associated with a membership function
which represents the degree of certainty. The membership function for (4) can be
expressed as follows:

    μi(cci(xi)) = { 1,                    if cci(xi) ≤ 0
                  { (εi − cci(xi)) / εi,  if 0 ≤ cci(xi) ≤ εi      (5)
                  { 0,                    if cci(xi) > εi

where εi stands for the acceptable limit of violation of the i-th CC during the
solution process.
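A direct implementation of the piecewise membership function (5); the value of εi used in the example calls is illustrative:

```python
def membership(cc_value, eps):
    """Membership degree mu_i for the fuzzified constraint of eq. (5)."""
    if cc_value <= 0.0:
        return 1.0
    if cc_value <= eps:
        return (eps - cc_value) / eps
    return 0.0

print(membership(-0.1, 0.2))  # 1.0  (constraint fully satisfied)
print(membership(0.1, 0.2))   # 0.5  (half of the allowed violation used)
print(membership(0.3, 0.2))   # 0.0  (violation beyond epsilon)
```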
To employ the fuzzy-enforced CC in the optimization problem, the degree of
satisfaction should be enhanced in the NIPM solution process; for that purpose, the
optimization problem can be rewritten with a multi-objective function as follows:

    min  f(x) − acc Σi zi
    s.t. g(x) = 0
         hmin ≤ h(x) ≤ hmax                                        (6)
         zi ≤ μi(cci(xi)),  i = 1, ..., kc

where acc is a weighting factor for the term maximizing zi, the lower limit of the
membership function of the i-th CC. In (6), h(·) is the function vector for the
inequality constraints, including the non-negativity conditions on the CC factors,
and hmin and hmax denote the lower and upper limit vectors of h(·).
The second term of the objective function in (6) forces each zi toward the reachable
maximum value of its membership function. From (6), one can notice that the selection
of acc is quite important. As in the membership function, the maximum value of zi is
1. If the slope of the original objective function, f(x), is much higher than acc,
then zi gets close to 1 and hence the infeasibility of the CCs might not be
acceptable. Thus, it would be better to keep acc around the maximum value of f(x)
during the solution process.
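To illustrate how the weighted term −acc Σ zi in (6) steers the search, the sketch below evaluates the multi-objective for two candidate solutions, with each zi taken at its reachable maximum μi(cci(xi)). The values of εi, acc, f(x) and cci(xi) are illustrative assumptions; a real solver would of course use the NIPM iteration rather than this direct comparison.

```python
def membership(cc, eps):
    """Membership function mu_i from eq. (5)."""
    if cc <= 0.0:
        return 1.0
    if cc <= eps:
        return (eps - cc) / eps
    return 0.0

def fuzzy_objective(f_val, cc_values, eps, a_cc):
    """Objective of eq. (6) with each z_i at its reachable maximum
    mu_i(cc_i(x_i))."""
    return f_val - a_cc * sum(membership(cc, eps) for cc in cc_values)

EPS, A_CC = 0.01, 5.0
# Candidate A satisfies the CC exactly; candidate B trades a CC violation
# beyond epsilon for a lower f(x).
obj_a = fuzzy_objective(2.0, [0.0], EPS, A_CC)   # 2.0 - 5.0 * 1.0 = -3.0
obj_b = fuzzy_objective(1.5, [0.02], EPS, A_CC)  # 1.5 - 5.0 * 0.0 =  1.5
print(obj_a < obj_b)  # True: the CC-satisfying candidate is preferred
```

With acc chosen too small relative to the slope of f(x), the comparison could flip, which is the sensitivity to acc discussed above.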

3 Conclusion
This paper presented a fuzzy set method for enforcing CC in nonlinear interior point
method (NIPM) based optimization. The method of fuzzy enforcement has been
implemented for CC and can be incorporated in optimization problems for real
applications.

References
1. Liu, W.-H.E., Guan, X.: Fuzzy constraint enforcement and control action curtailment in an
optimal power flow. IEEE Trans. Power Systems 11, 639–644 (1996)
2. Frisch, K.R.: The logarithmic potential method of convex programming. University Institute
of Economics, Oslo (1955)
3. Karmarkar, N.: A new polynomial-time algorithm for linear programming. Combinatorica 4,
373–395 (1984)
4. Wei, H., Sasaki, H., Kubokawa, J., Yokoyama, R.: An interior point nonlinear programming
for optimal power flow problems with a novel data structure. IEEE Trans. Power
Systems 13, 870–877 (1998)
5. Song, H., Lee, B., Kwon, S.-H., Ajjarapu, V.: Reactive reserve-based contingency
constrained optimal power flow (RCCOPF) for enhancement of voltage stability margins.
IEEE Trans. Power Systems 18, 1538–1546 (2003)
6. Song, H., Dosano, R., Lee, B.: Power system voltage stability classification using interior
point method based support vector machine (IPMSVM). International Journal of Fuzzy
Logic and Intelligent Systems 9, 238–243 (2009)
7. Zimmerman, H.J.: Fuzzy set theory and its application, 2nd edn. Kluwer Academic
Publishers (1991)
iPhone as Multi-CAM and Multi-viewer

Chen-Chia Chuang1, Shun-Feng Su2, Meng-Cheng Yang1,
Jin-Tsong Jeng3, Chih-Ching Hsiao4, and C.W. Tao1

1 Department of Electrical Engineering, National Ilan University, Taiwan
2 Department of Electrical Engineering,
  National Taiwan University of Science and Technology, Taiwan
3 Department of Computer Science and Information Engineering,
  National Formosa University, Taiwan
4 Department of Electrical Engineering, Kao Yuan University, Taiwan
ccchuang@niu.edu.tw

Abstract. Recently, several applications (apps) for web camera systems have been
proposed for the iPhone. The proposed web camera system is built on iOS smart mobile
devices, and the Objective-C programming language is employed to code the application
in Xcode. iOS mobile devices are usually equipped with a network interface and a
camera; thus, only the software for integration and linking needs to be designed in
order to replace a traditional webcam. The proposed system allows four iPhones or
iPads to capture and watch the current images over WiFi networks. In addition, the
image resolution and frames per second are adjusted according to the traffic of the
WiFi network. The proposed system can serve various applications such as home
monitoring and baby monitoring. Its advantage is that users can watch anytime,
anywhere, and the mobile devices acting as cameras can change location.

Keywords: iOS, multi-CAM, APP.

1 Introduction

Recently, intelligent mobile devices have been growing quickly, and various
applications (apps) for different mobile operating systems (OS) have been developed.
They are now widely used in multi-business service systems and interpersonal
communication. However, an intelligent mobile device's functionality is realized by
embedding applications developed for its respective operating system. Mobile device
operating systems include Symbian, Windows Mobile, iOS [1-3], Linux (with Android,
Maemo, and WebOS), Palm OS and BlackBerry OS. Because the Apple App Store offers more
apps than the stores of other operating systems, we chose iOS as the platform for the
proposed system. iOS was originally designed for the iPhone and was later applied to
the iPod touch, iPad, and Apple TV products. Like other products based on the Mac OS
X operating system, it is built on the Darwin foundation and is a Unix-like operating
system. The iOS system architecture is divided into four layers: the Core OS layer,
the Core Services layer, the Media layer, and the Cocoa Touch layer. iOS has the App
Store, which is popular and well-managed. It can provide objective data for user
testing of applications, as in this study, which facilitates the performance
evaluation and improvement of applications.

Y.S. Kim et al. (eds.), Advanced Intelligent Systems, 17
Advances in Intelligent Systems and Computing 268,
DOI: 10.1007/978-3-319-05500-8_3, © Springer International Publishing Switzerland 2014
In the past, an app for using the iPhone as a webcam has been proposed. However, it
only provides a single device capturing images and other iOS-based devices watching
them. The frame rate is adjusted according to the traffic of the WiFi network, and in
most cases this app works well. In that app, only one CAM captures images, while
other Apple mobile devices are used as viewers. In this study, the iPhone as
multi-CAM and multi-viewer is proposed. In the proposed system, four CAMs and one
viewer are used to monitor a house, for elder/baby care and for house security.
Furthermore, to ensure that images are transferred fluently, the image resolution can
be adjusted according to the current transmission rate of the wireless LAN. Note that
the proposed system only runs in a wireless WiFi environment.
This article is organized as follows. Section 2 describes the development tools. In
Section 3, the system blocks are described. Some experimental results are provided in
Section 4. Section 5 concludes the paper.

2 Related Tools

SDK
A Software Development Kit (SDK) [7] is a collection of development tools that
software engineers use to create applications for a specific software package,
software framework, hardware platform or operating system. In general, the tools
include debuggers and other utilities. An SDK often includes sample code, supporting
annotations and other documentation that clarify areas of doubt and serve as a basic
reference. Software engineers usually obtain the software development kit from the
target system's developer. In order to encourage developers to use the system or
language, many SDKs are provided free of charge. An SDK might come with a development
license that prevents it from being used in an incompatible environment; for example,
a proprietary SDK may conflict with free software development.
WiFi
Some devices in the WiFi environment are described as follows:
• Station: the basic component of the network.
• Basic Service Set (BSS): the basic service component of the network. The simplest
  service component may consist of only two stations. Stations can dynamically join
  (associate with) the basic service component.
• Access Point (AP): a device for connecting to a wireless computer network.
• Extended Service Set (ESS): a set of two or more interconnected wireless BSSs that
  share the same SSID (network name), security credentials and integrated wired local
  area networks (providing translation between 802.3 and 802.11 frames), and that
  appear as a single BSS to the logical link control layer at any station associated
  with one of those BSSs; this facilitates mobile IP and fast secure roaming
  applications. The BSSs may work on the same channel, or on different channels to
  boost aggregate throughput.
• Basic Service Set Identification (BSSID): each BSS is uniquely identified by a
  BSSID. The BSSID is the MAC address of the wireless access point (WAP), generated
  from the 24-bit Organization Unique Identifier (the manufacturer's identity).
Xcode
Xcode [8] is an integrated development environment for Mac OS X that is provided for
developers. Registered developers can download preview releases and previous versions
of the suite through the Apple Developer website. Xcode's predecessor, Project
Builder, was inherited from NeXT. The Xcode suite includes the free GNU Compiler
Collection (GCC, apple-darwin9-gcc-4.0.1 and apple-darwin9-gcc-4.2.1) and supports
the C, C++, Fortran, Objective-C, Objective-C++, Java, AppleScript and Python
languages. It also provides the Ruby, Cocoa, Carbon, and Java programming models.
Partners also provide GNU Pascal, Free Pascal, Ada, C#, Perl, Haskell and the D
language. The Xcode suite uses the GDB debugging tools in the background.
Objective-C
Objective-C [9-11] is a general-purpose, high-level, object-oriented programming
language. It adds Smalltalk-style message-passing mechanisms to ANSI C and is an
extension of the standard ANSI C programming language. It is the programming language
of Apple's OS X and iOS operating systems and of their associated APIs, Cocoa and
Cocoa Touch. Objective-C was originally derived from the NeXTSTEP system and was
later inherited by OS X and iOS. Compilers are provided by GCC and Clang, and Clang
is used in the latest Xcode. In C++, types and methods are very strict and clear: a
method must belong to a type and is tightly bound at compile time, so a method that
does not exist for a type cannot be called in traditional environments. This is not
the case in Objective-C, where the typing of messages is relatively loose. Calling a
method is similar to sending a message to the object, and all methods are regarded as
responses to messages. All message processing is determined dynamically at execution
time, and the receiving type decides how to handle the messages received.
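Objective-C's run-time message resolution has a rough analogue in Python's dynamic attribute lookup. The snippet below is only an analogy to illustrate resolving a "message" at execution time; it is Python, not Objective-C, and the class and selector names are made up for the example.

```python
class Greeter:
    def hello(self):
        return "hello"

obj = Greeter()
selector = "hello"                     # chosen at run time, like a selector
method = getattr(obj, selector, None)  # resolve the "message" dynamically
result = method() if method else "message not understood"
print(result)  # hello
```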

3 The Proposed System Blocks


The proposed system blocks are shown in Figure 1 and are described as follows:

Checking WiFi status: notify users to activate WiFi when the application is enabled.
Camera: open the camera as a host.
    Open camera and wait for pair connection: the host waits for a client device to
    establish a pair connection.
    Start to transmit a stream: after pairing, the host starts transmitting a stream.
    Stop connection: the connection stops if anyone presses the return key or the
    devices move out of WiFi range.

Fig. 1. The proposed system blocks

Fig. 2. Notify users to activate WiFi when the app is enabled

Fig. 3. The select menu screen

Fig. 4. Select a device to watch when iPhone as CAM is selected

Fig. 5. A video selected below the bar can be switched to and watched

Viewer: open as a client; users can choose among the available streams.
    Select peer to watch video: select a device to watch under the same domain
    network.
    Start watching: start receiving streams from the host.
    Stop connection: the connection stops if anyone presses the return key or the
    devices move out of WiFi range.
Setting: some parameters (video resolution, frame rate and compression ratio) can be
adjusted.
Figure 2 shows the notification asking users to activate WiFi when the application is
enabled. In Figure 3, the menu of the proposed app is shown; users can select their
Apple mobile device to act as a CAM or as a viewer. When the viewer is selected, the
monitored views (i.e., the Apple mobile devices acting as CAMs) are shown at the
bottom, as illustrated in Figure 4. As shown in Figure 5, a video selected below the
bar can be switched to and amplified for watching.

4 Experiment Results
Firstly, the relationship between the frames per second (fps) and the compression
ratio of the images is considered. In this case, the new iPad and the iPad 2 are
selected as the viewer and the CAM, respectively. Some results are tabulated in
Table 1. When the compression ratio of the images is larger than 70% for the 480×360
image size, delay appears. Secondly, the iPad 2 and the new iPad are used to test the
proposed app with a wireless AP bandwidth of 40 Mbps. In this situation, different
image resolutions and mobile devices are considered and tabulated in Table 2.

Table 1. The relationship between fps and compression ratio

Video Resolution    FPS    Compression Ratio    Delay
480×360             20     0.5                  0 sec
480×360             22     0.5                  0 sec
480×360             24     0.5                  0 sec
480×360             20     0.6                  0 sec
480×360             22     0.6                  0 sec
480×360             24     0.6                  0 sec
480×360             20     0.7                  1~2 sec
480×360             22     0.7                  2 sec
480×360             24     0.7                  2~3 sec
480×360             20     0.8                  3~4 sec
480×360             22     0.8                  4 sec
480×360             24     0.8                  5 sec
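Delays of the kind shown in Table 1 arise when the stream's bit rate exceeds what the link can carry. A back-of-the-envelope check of whether a given resolution, frame rate and compression ratio fit a WiFi link can be sketched as follows; the frame-size model and the 2-bytes-per-pixel constant are rough assumptions for illustration, not measurements from the experiment.

```python
def fits_link(width, height, fps, compression, link_mbps, bytes_per_pixel=2):
    """Rough check of whether a video stream fits a WiFi link.

    A frame is approximated as width * height * bytes_per_pixel bytes,
    shrunk by the compression ratio; the whole model is an illustrative
    assumption rather than a measurement.
    """
    frame_bits = width * height * bytes_per_pixel * 8 * (1.0 - compression)
    return frame_bits * fps <= link_mbps * 1e6

# 480x360 at 24 fps with 50% compression on a 40 Mbps access point
print(fits_link(480, 360, 24, 0.5, 40))  # True
```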

Table 2. Delay times when the new iPad and iPad 2 are used

Video Resolution    Connected Units    Delay    Camera/Viewer
192×144             1                  0 sec    1 new iPad / 1 iPad 2
192×144             2                  0 sec    2 new iPad / 1 iPad 2
192×144             3                  0 sec    3 new iPad / 1 iPad 2
192×144             4                  0 sec    4 new iPad / 1 iPad 2
352×288             1                  0 sec    1 new iPad / 1 iPad 2
352×288             2                  0 sec    2 new iPad / 1 iPad 2
352×288             3                  0 sec    3 new iPad / 1 iPad 2
352×288             4                  0 sec    4 new iPad / 1 iPad 2
480×360             1                  NA       1 new iPad / 1 iPad 2
480×360             2                  NA       3 new iPad / 1 iPad 2
480×360             3                  NA       3 new iPad / 1 iPad 2
480×360             4                  NA       4 new iPad / 1 iPad 2
iPhone as Multi-CAM and Multi-viewer 23

In most cases, the proposed app works well. However, the screen shows a slight delay
when the 352×288 resolution and 4 connected devices are used. Because the CPU of the
iPad 2 cannot process such large images, the proposed app can close abnormally when
the 480×360 resolution and 2 connected devices are used. Next, the iPhone 5 and the
new iPad are tested. In this situation, different image resolutions and devices are
considered and tabulated in Table 3. In most cases, the proposed app also works well.
However, the screen shows a longer delay when the 352×288 resolution and 4 connected
devices are used. If the testing mobile devices were all iPhone 5s, we believe the
delay could be eliminated. In the future, the issues of delay time and image
resolution should be overcome.

Table 3. Delay times when the new iPad and iPhone 5 are used

Video Resolution    Connected Units    Delay     Camera/Viewer
192×144             1                  0 sec     1 iPhone 5 / 1 new iPad
192×144             2                  0 sec     2 iPhone 5 / 1 new iPad
192×144             3                  0 sec     3 iPhone 5 / 1 new iPad
192×144             4                  0 sec     4 iPhone 5 / 1 new iPad
352×288             1                  0 sec     1 iPhone 5 / 1 new iPad
352×288             2                  0 sec     2 iPhone 5 / 1 new iPad
352×288             3                  0 sec     3 iPhone 5 / 1 new iPad
352×288             4                  0 sec     4 iPhone 5 / 1 new iPad
480×360             1                  0 sec     1 iPhone 5 / 1 new iPad
480×360             2                  0 sec     2 iPhone 5 / 1 new iPad
480×360             3                  5 sec     3 iPhone 5 / 1 new iPad
480×360             4                  10 sec    4 iPhone 5 / 1 new iPad

5 Conclusion
In this study, Apple mobile devices running the proposed app can be used as CAMs and
viewers. The proposed app is easily extended to house security (monitoring) and
elder/baby care. However, delay appears when older devices are used and the network
bandwidth is insufficient. We also provide some experimental results for the iPhone
as multi-CAM and multi-viewer. In the future, video streaming technology will be used
to overcome the delay.

Acknowledgement. This work was supported by National Science Council under Grant
NSC 101-2221-E-197-016-MY3.

References
[1] Mark, D.: Beginning iPhone 4 Development: Exploring the iOS SDK. Springer-Verlag
New York Inc. (January 31, 2011)
[2] Apple Dev Center,
https://developer.apple.com/devcenter/ios/index.action
[3] iOS, http://en.wikipedia.org/wiki/IOS
[4] Dong, J.Z.: Construct of Cell Phone Global Positioning System with Software Model.
Department of Electrical Engineering National Ilan University Master Thesis (2008)
[5] GPS, http://en.wikipedia.org/wiki/Global_Positioning_System
[6] Rousseeuw, P.J., Leroy, M.A.: Robust Regression and Outlier Detection. Wiley (1987)
[7] SDK, http://zh.wikipedia.org/wiki/SDK
[8] Xcode (2012), http://en.wikipedia.org/wiki/Xcode
[9] Kochan, S.G.: Programming in Objective-C. Addison-Wesley (June 10, 2011)
[10] Ash, M.: Pro Objective-c for MAC and Iphone. Springer-Verlag New York Inc. (March
30, 2012)
[11] Devoe, J.: Objective-C. John Wiley & Sons Inc. (2011)
Robust Gaussian Kernel Based Approach
for Feature Selection

Chih-Ching Hsiao1, Chen-Chia Chuang2, and Shun-Feng Su3

1 Department of Information Technology, Kao Yuan University, Taiwan
  cchsiao@cc.kyu.edu.tw
2 Department of Electrical Engineering, National Ilan University, Taiwan
  ccchuang@niu.edu.tw
3 Department of Electrical Engineering,
  National Taiwan University of Science and Technology, Taiwan
  sfsu@mail.ntust.edu.tw

Abstract. The outlier problem in feature selection is rarely discussed in previous
works. Moreover, no work has been reported in the literature on symbolic interval
feature selection in the supervised framework. In this paper, we incorporate the
similarity margin concept and Gaussian kernel fuzzy rough sets to deal with the
symbolic data selection problem, which is also an optimization problem. The advantage
of this approach is that it can easily introduce a loss function and provides
robustness.

Keywords: outlier, feature selection, interval feature, symbolic data selection.

1 Introduction

Current database systems are becoming more and more complex, and more and more massive amounts of data are stored in them; finding valuable information in such databases has therefore become hard work. To take into account the uncertainty usually inherent in measurement devices, or to reduce large datasets, the interval representation of data has seen widespread use in recent years. Recently, clustering models for interval data have been suggested by a number of researchers in terms of Symbolic Data Analysis (SDA) [1]. Symbolic Data Analysis is a new domain related to multivariate analysis, pattern recognition and artificial intelligence, and many research works have aimed to extend classical exploratory data analysis and statistical methods to symbolic data. Indeed, in the SDA framework, symbolic objects are extensions of classical data types, in the sense that, in the case of symbolic interval data, each variable may take an interval of values instead of a single value [2,3].
Feature selection (also called attribute reduction) methods seek to choose a small subset of features that ideally is necessary and sufficient to describe the target concept. It is a common technique used in data preprocessing for pattern recognition, machine learning, rule extraction and data mining, and has attracted much attention in recent years [4-7]. In recent years, both the number and dimensionality of items in datasets have grown dramatically for some real-world applications. It is well known

Y.S. Kim et al. (eds.), Advanced Intelligent Systems, Advances in Intelligent Systems and Computing 268, p. 25, DOI: 10.1007/978-3-319-05500-8_4, © Springer International Publishing Switzerland 2014

that an excessive amount of features may cause a significant slowdown in the learning process, and may increase the risk that the learned classifier over-fits the training data, because irrelevant, redundant or outlier features confuse learning algorithms [5]. To address this issue, as pointed out in [8], some attributes can be omitted without seriously affecting the resulting classification accuracy. Though the concept of symbolic data has been studied extensively in clustering, its inherent capabilities in the problem of Symbolic Data Selection (SDS) have not been sufficiently explored [24].
The rough set theory proposed by Pawlak [9] is a mathematical theory dealing with uncertainty in data. The concepts of attribute reduction and rule extraction can be viewed as the strongest and most important results of rough set theory, distinguishing it from other theories. Many models have been proposed for generalizing rough sets to the fuzzy environment [10-12]. Consequently, the formal concept of attribute reduction with fuzzy rough sets [13,14] and a generalized interval type-2 fuzzy rough set [15] have been proposed. In [16], the authors incorporated the Gaussian kernel into fuzzy rough sets and proposed a Gaussian kernel approximation based fuzzy rough set model. Subsequently, the authors introduced parameterized attribute reduction with the derived model of fuzzy rough sets [17]. Ma [18] introduced weights into the variable precision rough set model to represent the importance of each sample, and discussed the influence of weights on attribute reduction. In [19], the authors introduced weights into the rough set model to balance the class distribution of a data set and developed a weighted rough set based method to deal with the class imbalance problem. Aiming at efficient feature selection, many heuristic attribute reduction algorithms have been developed in rough set theory [20,21]. Each of these algorithms preserves a particular property of a given information system. As a generalization of fuzzy sets, the notion of interval-valued fuzzy sets was proposed in [22]. An interval-valued membership is easier to determine than a single-valued one. Due to the complementarity between interval-valued fuzzy sets and rough sets, interval-valued rough fuzzy sets, which combine interval-valued fuzzy sets with rough sets, were proposed [23], and a method of knowledge discovery was subsequently presented for interval-valued fuzzy information systems. The concept of fuzzy rough sets has also been generalized to interval type-2 (IT2) fuzzy environments, and an IT2 fuzzy-rough QuickReduct algorithm has been proposed [16]. In [24], a supervised framework based on similarity margin is proposed for the SDS problem. In this method, a similarity measure is defined in order to estimate the similarity between the interval feature value and each class prototype. Then, the heuristic search is avoided by optimizing an objective function to evaluate the importance of each interval feature in a similarity margin framework.
The outlier problem of feature selection is rarely discussed in most previous works. Moreover, no work has been reported in the literature on symbolic interval feature selection in the supervised framework [24]. Hedjazi [24] proposes a feature selection method for symbolic interval data based on similarity margin. In this method, classes are parameterized by an interval prototype based on an appropriate learning process. A similarity measure is defined in order to estimate the similarity between the interval feature value and each class prototype. Then, a similarity margin concept is introduced. The heuristic search is avoided by optimizing an objective function to evaluate the importance of each interval feature in a similarity margin

framework. In [16], the authors incorporated the Gaussian kernel into fuzzy rough sets and proposed a Gaussian kernel approximation based fuzzy rough set model. Subsequently, the authors introduced parameterized attribute reduction with the derived model of fuzzy rough sets [17]. The basic idea is that the similarity between two samples is computed with the Gaussian kernel function; the Gaussian kernel therefore induces a fuzzy relation satisfying the properties of reflexivity and symmetry. Moreover, the Gaussian kernel can be used to compute fuzzy T-equivalence relations in fuzzy rough sets, and thus arbitrary fuzzy subsets can be approximated with kernel-induced fuzzy granules. In this paper, we incorporate the similarity margin concept and Gaussian kernel fuzzy rough sets to deal with the SDS problem, which is also formulated as an optimization problem. The advantage of this approach is that a loss function can easily be introduced to provide robustness. Such an approach is called the Robust Gaussian kernel based feature selection algorithm.
The remaining part of the paper is outlined as follows. Section 2 describes the fundamentals of the similarity margin concept and Gaussian kernel fuzzy rough sets. In Section 3, the Robust Gaussian kernel based feature selection algorithm is proposed. Experimental evaluation is presented in Section 4. Concluding remarks are presented in Section 5.

2 Related Works

The outlier problem of feature selection is rarely discussed in most previous works. We incorporate the similarity margin concept and Gaussian kernel fuzzy rough sets to deal with the SDS problem, which is formulated as an optimization problem. The underlying theories and concepts are briefly stated as follows.

2.1 Similarity-Margin Based Feature Selection Algorithm [24]

Hedjazi [24] proposes a feature selection method for symbolic interval data based on similarity margin. In this method, classes are parameterized by an interval prototype based on an appropriate learning process. A similarity measure is defined in order to estimate the similarity between the interval feature value and each class prototype. Then, a similarity margin concept is introduced. The heuristic search is avoided by optimizing an objective function to evaluate the importance of each interval feature in a similarity margin framework.
Given two intervals $A = [a^L, a^U]$ and $B = [b^L, b^U]$, a similarity measure is defined by

$$S(A,B) = \frac{1}{2}\left( \frac{\varpi[A \cap B]}{\varpi[A \cup B]} + 1 - \frac{\partial[A,B]}{\partial[U]} \right) \qquad (1)$$

where $\partial[A,B] = \max\big[0,\ \max(a^L, b^L) - \min(a^U, b^U)\big]$ is the distance (gap) between the two intervals, and $\varpi[x] = x^U - x^L$ is the length measure of an interval $x$.

This similarity measure consists of two terms. The first term corresponds to the well-known Jaccard similarity measure, which computes the similarity when the two intervals overlap; the second term takes into account the similarity when the two intervals do not overlap.
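As a concrete illustration, Eq. (1) can be sketched as follows. Representing intervals as (lo, hi) tuples, treating the union of disjoint intervals as their convex hull, and reading ∂[U] as the length of the whole domain interval U are our assumptions for this sketch.

```python
# Sketch of the interval similarity measure of Eq. (1).
# Intervals are (lo, hi) tuples; u is the assumed domain interval.

def length(iv):
    """Length measure w[x] = xU - xL of an interval iv = (lo, hi)."""
    return iv[1] - iv[0]

def gap(a, b):
    """Distance d[A,B]: the gap between two intervals (0 if they overlap)."""
    return max(0.0, max(a[0], b[0]) - min(a[1], b[1]))

def interval_similarity(a, b, u):
    """S(A,B) = (1/2) * (w[A^B]/w[AvB] + 1 - d[A,B]/d[U])."""
    inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))  # length of A intersect B
    union = max(a[1], b[1]) - min(a[0], b[0])            # hull of A union B (assumption)
    jaccard = inter / union if union > 0 else 1.0
    return 0.5 * (jaccard + 1.0 - gap(a, b) / length(u))
```

Identical intervals score 1, while disjoint intervals are still ranked by how far apart they lie within the domain, which is exactly the role of the second term.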
Assume that the $n$-th data sample $x_n = [x_n^1, x_n^2, \ldots, x_n^m]$ in the dataset is labeled by the class $c$. The class $k$ prototype is $\rho_k = [\rho_k^1, \rho_k^2, \ldots, \rho_k^m]$. The weighted similarity margin for sample $x_n$ is defined as

$$\vartheta_n = \min_{\{\bar{c}\in C,\ \bar{c}\neq c(x_n)\}} \big[\varphi(\Gamma_n^c) - \varphi(\Gamma_n^{\bar{c}})\big] \qquad (2)$$

$$\varphi(\Gamma_n^c) - \varphi(\Gamma_n^{\bar{c}}) = \frac{1}{m}\sum_{i=1}^{m} \alpha_i \cdot \big(S(x_n^i, \rho_c^i) - S(x_n^i, \rho_{\bar{c}}^i)\big) = \frac{1}{m}\sum_{i=1}^{m} \alpha_i \cdot E_n^i \qquad (3)$$

where $\bar{c}$ is a complement class of $c$, and $\Gamma_n^c = [S(x_n^1, \rho_c^1), S(x_n^2, \rho_c^2), \ldots, S(x_n^m, \rho_c^m)]^T$ and $\Gamma_n^{\bar{c}} = [S(x_n^1, \rho_{\bar{c}}^1), S(x_n^2, \rho_{\bar{c}}^2), \ldots, S(x_n^m, \rho_{\bar{c}}^m)]^T$ are, respectively, the similarity vectors of the sample $x_n$ to the class $c$ and to its complement class. $E_n^i = S(x_n^i, \rho_c^i) - S(x_n^i, \rho_{\bar{c}}^i)$ is the $i$-th similarity error of the $n$-th data sample. $\alpha = [\alpha_1, \alpha_2, \ldots, \alpha_m]$ is the interval feature weight vector, which expresses the relative degree of usefulness of each interval feature for the discrimination between the two classes. A natural idea for calculating the weight vector $\alpha$ is to minimize a margin-based error function. The problem of estimating the nonnegative vector can be transformed into the following optimization problem:
$$\min_{\alpha} \; \sum_{n=1}^{N} \chi(\vartheta_n < 0) \qquad (4)$$

where $\chi(\cdot)$ is the indicator function. The classical Lagrangian optimization method can be used to solve the above problem, which can be rewritten as

$$\max_{\alpha} \; \frac{1}{m}\sum_{n=1}^{N} \min_{\{\bar{c}\in C,\ \bar{c}\neq c(x_n)\}} \Big( \sum_{i=1}^{m} \alpha_i \cdot E_n^i \Big) \qquad (5)$$

subject to $\|\alpha\|^2 = 1$, $\alpha \geq 0$.

The first constraint is the normalization bound on the modulus of $\alpha$, so that the maximization ends up with finite values, whereas the second guarantees the nonnegativity of the obtained weight vector.

2.2 Gaussian Kernel Based Fuzzy Rough Sets [16,17]


In [16,17], the authors incorporated the Gaussian kernel into fuzzy rough sets and proposed a Gaussian kernel approximation based fuzzy rough set model. The basic idea is that the similarity between two samples is computed with the Gaussian kernel function. Moreover, the Gaussian kernel can be used to compute fuzzy T-equivalence relations in fuzzy rough sets, and thus arbitrary fuzzy subsets can be approximated with kernel-induced fuzzy granules.
The similarity between two samples is computed with the Gaussian kernel function

$$k(x_i, x_j) = \exp\!\big(-\|x_i - x_j\|^2 / 2\delta^2\big) \qquad (6)$$

where $\|x_i - x_j\|$ is the Euclidean distance between samples $x_i$ and $x_j$. The Gaussian kernel therefore induces a fuzzy relation satisfying the properties of reflexivity and symmetry.

3 Robust Gaussian Kernel Based Feature Selection Algorithm

In this paper, the Robust Gaussian kernel based feature selection algorithm for symbolic interval-valued data with outliers is proposed. We incorporate the similarity margin concept and Gaussian kernel fuzzy rough sets to deal with the SDS problem. The similarity-margin based approach [24] can reduce irrelevant features, but not redundant features, because its similarity measure cannot be chosen flexibly. In other words, redundant features and weakly relevant features may prevent the important features from being suitably selected. Moreover, intervals that contain outliers may result in bad feature selections. Thus, we also introduce a loss function into the proposed approach.
Given two intervals $A = [a^L, a^U]$ and $B = [b^L, b^U]$, the Gaussian kernel similarity measure is defined as

$$S_G(A,B) = \exp\!\big(-\partial(A,B)^2 / 2\delta^2\big) \qquad (7)$$

where $\partial(A,B)$ is the distance measure between the two given intervals. Thus, the similarity error can be rewritten as

$$E_n^i = S_G(x_n^i, \rho_c^i) - S_G(x_n^i, \rho_{\bar{c}}^i) \qquad (8)$$

At the same time, two parameters, the kernel width $\delta$ and the choice of the distance measure $\partial$, can be used to control the Gaussian kernel similarity measure, which makes it more flexible. The distance measure is taken as the Hausdorff-like distance, defined as

$$\partial(A,B) = \max\big\{\,|a^L - b^L|,\ |a^U - b^U|\,\big\} \qquad (9)$$
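A sketch of Eqs. (7) and (9) follows; the kernel width delta is a free parameter, and its default here is an arbitrary illustrative choice.

```python
import math

# Sketch of the Gaussian kernel interval similarity (Eq. (7)) built on the
# Hausdorff-like interval distance (Eq. (9)). Intervals are (lo, hi) tuples.

def hausdorff_interval(a, b):
    """d(A,B) = max(|aL - bL|, |aU - bU|)."""
    return max(abs(a[0] - b[0]), abs(a[1] - b[1]))

def gaussian_interval_similarity(a, b, delta=1.0):
    """S_G(A,B) = exp(-d(A,B)^2 / (2 * delta^2))."""
    d = hausdorff_interval(a, b)
    return math.exp(-d * d / (2.0 * delta * delta))
```

Larger delta makes the similarity decay more slowly with distance, which is one of the two control knobs mentioned above.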

Incorporating the Gaussian kernel similarity measure and a loss function, the objective function in Eq. (5) can be rewritten as

$$\max_{\alpha} \; \frac{1}{m}\sum_{n=1}^{N} \min_{\{\bar{c}\in C,\ \bar{c}\neq c(x_n)\}} \Big( \sum_{i=1}^{m} \psi(\alpha_i \cdot E_n^i) \Big) \qquad (10)$$

subject to $\|\alpha\|^2 = 1$, $\alpha \geq 0$,

where $\psi(\cdot)$ is a robust loss function and $E_n^i$ is the similarity error defined in Eq. (8). The Lagrange multiplier method is applied, and we obtain the closed-form result for the weight vector $\alpha$:

$$\alpha = \frac{\gamma^+}{\|\gamma^+\|} \qquad (11)$$

where $\gamma = \frac{1}{m}\sum_{n=1}^{N} \min_{\{\bar{c}\in C,\ \bar{c}\neq c(x_n)\}} \{\Delta\Gamma_n\}$, $\Delta\Gamma_n^{\bar{c}} = [\psi'(\alpha_1 \cdot E_n^1) \cdot E_n^1,\ \psi'(\alpha_2 \cdot E_n^2) \cdot E_n^2,\ \ldots,\ \psi'(\alpha_m \cdot E_n^m) \cdot E_n^m]^T$, and $\gamma^+ = [\max(\gamma_1, 0), \ldots, \max(\gamma_m, 0)]$.

The proposed method can be summarized by the following algorithm:

Step 1: Initialize the weight vector $\alpha$ to zero; the number of iterations is chosen as $t \cdot N$, where $t$ is an integer.
Step 2: Calculate the prototype parameters of each class with respect to the dataset.
Step 3: Randomly select a sample $x_n$ from the dataset.
Step 4: Calculate the similarity vectors of the sample $x_n$ for each class.
Step 5: Update the vector $\gamma$ as

$$\gamma = \gamma + \frac{1}{m}\sum_{n=1}^{N} \min_{\{\bar{c}\in C,\ \bar{c}\neq c(x_n)\}} \{\Delta\Gamma_n\}$$

Step 6: Estimate the weight vector as

$$\alpha = \frac{\gamma^+}{\|\gamma^+\|} \quad \text{with} \quad \gamma^+ = [\max(\gamma_1, 0), \ldots, \max(\gamma_m, 0)]$$

Step 7: If the weight vector $\alpha$ remains unchanged, then END; otherwise go to Step 3.
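The iteration above can be sketched as follows, under two simplifying assumptions: a two-class problem (so the minimum over complement classes is trivial, and E[n, i] is the i-th similarity error of sample n from Eq. (8), assumed precomputed), and a saturating robust loss psi(u) = tanh(u), whose derivative downweights large, possibly outlier-dominated terms. Both choices are illustrative, not the authors' prescribed ones.

```python
import numpy as np

def psi_prime(u):
    """Derivative of the illustrative robust loss psi(u) = tanh(u)."""
    return 1.0 - np.tanh(u) ** 2

def estimate_weights(E, n_iters=200, seed=0):
    """Sketch of Steps 1-7 for an N x m similarity-error matrix E."""
    N, m = E.shape
    alpha = np.zeros(m)                 # Step 1: initialise the weight vector
    gamma = np.zeros(m)
    rng = np.random.default_rng(seed)
    for _ in range(n_iters):
        n = rng.integers(N)             # Step 3: pick a random sample
        # Step 5: accumulate Delta-Gamma_n, components psi'(a_i * E_ni) * E_ni
        gamma = gamma + psi_prime(alpha * E[n]) * E[n] / m
        gp = np.maximum(gamma, 0.0)     # gamma_plus
        norm = np.linalg.norm(gp)
        if norm > 0.0:                  # Step 6: alpha = gamma_plus / ||gamma_plus||
            alpha = gp / norm
    return alpha
```

Features whose similarity errors are consistently positive accumulate large weights; the top-ranked entries of alpha are then the selected interval features.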

4 Experimental Evaluation

The dataset concerns Barcelona's water distribution network [25], and describes one year of daily water flow grouped into two groups according to the day type: weekends or workdays. The dataset can be found online at http://lhedjazi.jimdo.com/useful-links [24]. In this dataset, each day is characterized by 48 interval features. It contains data for only 316 days out of the 365 in the year, because days with false or missing measurement data were discarded. Such false data can be considered as outliers. In this paper, we artificially added 30 days with false data to the dataset. The LAMDA method [24] is applied to evaluate the classification error as a function of the number of top-ranked features. The result shows that the top-ranked 10 interval features yield the smallest classification error. This demonstrates that the proposed approach can deal with the outlier problem for interval data in a large dataset. Table 1 shows the result of interval feature selection.

Table 1. Result of interval feature selection

Water dataset       Smallest classification error   Number of selected features   Selected interval features
Hedjazi [24]        0.23                            11                            IF11, IF20, IF23, IF30, IF31, IF32, IF33, IF34, IF37, IF39, IF41
Proposed approach   0.21                            10                            IF23, IF30, IF31, IF32, IF33, IF34, IF37, IF39, IF41, IF43

5 Conclusion

In this paper, the "Robust Gaussian kernel based feature selection algorithm" for symbolic interval-valued data with outliers has been proposed. This approach incorporates the similarity margin concept and Gaussian kernel fuzzy rough sets to deal with the SDS problem. Its advantage is that a loss function can easily be introduced to provide robustness. The experimental evaluation was performed on the Water dataset with false data, and the results show that the proposed approach can deal with the outlier problem for interval data in a large dataset.

Acknowledgments. This work was supported in part by the National Science Council of Taiwan under Grant NSC 102-2221-E-244-015.

References
1. Diday, E., Esposito, F.: An introduction to Symbolic Data Analysis and the SODAS software. Intelligent Data Analysis 7, 583–601 (2003)
2. Gowda, K.C., Diday, E.: Symbolic clustering using a new similarity measure. IEEE Trans. Systems Man Cybern. 22, 368–378 (1992)
3. Guo, J., Li, W., Li, C., Gao, S.: Standardization of interval symbolic data based on the empirical descriptive statistics. Computational Statistics and Data Analysis 56, 602–610 (2012)
4. Dash, M., Liu, H.: Consistency-based search in feature selection. Artificial Intelligence 151, 155–176 (2003)
5. Yu, L., Liu, H.: Efficient feature selection via analysis of relevance and redundancy. Journal of Machine Learning Research 5, 1205–1224 (2004)
6. Zhu, Z.X., Ong, Y.S., Dash, M.: Wrapper-filter feature selection algorithm using a memetic framework. IEEE Trans. on SMC Part B 37, 70–76 (2007)
7. Wang, F., Liang, J., Dang, C.: Attribute reduction for dynamic data sets. Applied Soft Computing 13, 676–689 (2013)
8. Hu, Q.H., Xie, Z.X., Yu, D.R.: Hybrid attribute reduction based on a novel fuzzy-rough model and information granulation. Pattern Recognition 40, 3509–3521 (2007)
9. Pawlak, Z.: Rough sets. Int. J. Comput. Inf. Sci. 11, 341–356 (1982)
10. Chen, D.G., Wang, X.Z., Yeung, D.S., Tsang, E.C.C.: Rough approximations on a complete completely distributive lattice with applications to generalized rough sets. Information Sciences 176, 1829–1848 (2006)
11. Yeung, D.S., Chen, D.G., Tsang, E.C.C., Lee, J.W.T., Wang, X.Z.: On the generalization of fuzzy rough sets. IEEE Transactions on Fuzzy Systems 13, 343–361 (2005)
12. Jensen, R., Shen, Q.: Fuzzy-rough attributes reduction with application to web categorization. Fuzzy Sets and Systems 141, 469–485 (2004)
13. Tsang, E.C.C., Chen, D.G., Yeung, D.S., Wang, X.Z., Lee, J.W.T.: Attributes reduction using fuzzy rough sets. IEEE Transactions on Fuzzy Systems 16(5), 1130–1141 (2008)
14. Jensen, R., Shen, Q.: New approaches to fuzzy-rough feature selection. IEEE Transactions on Fuzzy Systems 17(4), 814–838 (2009)
15. Wu, H., Wu, Y., Luo, J.: An interval type-2 fuzzy rough set model for attribute reduction. IEEE Transactions on Fuzzy Systems 17(2) (2009)
16. Hu, Q., Zhang, L., Chen, D., Pedrycz, W., Yu, D.: Gaussian kernel based fuzzy rough sets: Model, uncertainty measures and applications. International Journal of Approximate Reasoning 51, 453–471 (2010)
17. Chen, D., Hu, Q., Yang, Y.: Parameterized attribute reduction with Gaussian kernel based fuzzy rough sets. Information Sciences 181, 5169–5179 (2011)
18. Ma, T.-H., Tang, M.-L.: Weighted rough set model. In: Int. Conf. Intelligent Systems Design and Applications, pp. 481–485 (2006)
19. Liu, J., Hu, Q., Yu, D.: A weighted rough set based method developed for class imbalance learning. Information Sciences 178, 1235–1256 (2008)
20. Liang, J.Y., Chin, K.S., Dang, C.Y., Yam, R.C.M.: A new method for measuring uncertainty and fuzziness in rough set theory. International Journal of General Systems 31(4), 331–342 (2002)
21. Hu, Q.H., Xie, Z.X., Yu, D.R.: Hybrid attribute reduction based on a novel fuzzy-rough model and information granulation. Pattern Recognition 40, 3509–3521 (2007)
22. Gorzalczany, B.: Interval-valued fuzzy controller based on verbal model of object. Fuzzy Sets and Systems 28, 45–53 (1988)
23. Gong, Z.T., Sun, B.Z., Chen, D.G.: Rough set theory for the interval-valued fuzzy information systems. Information Sciences 178, 1968–1985 (2008)
24. Hedjazi, L., Aguilar-Martin, J., Le Lann, M.-V.: Similarity-margin based feature selection for symbolic interval data. Pattern Recognition Letters 32, 578–585 (2011)
25. Quevedo, J., Puig, V., Cembrano, G., Blanch, J., Aguilar, J., Saporta, D., Benito, G., Hedo, M., Molina, A.: Validation and reconstruction of flow meter data in the Barcelona water distribution network. J. Control Eng. Practice 18, 640–651 (2010)
Multi Human Movement Trajectory Extraction by Thermal Sensor

Masato Kuki1, Hiroshi Nakajima2, Naoki Tsuchiya2, Junichi Tanaka2, and Yutaka Hata1,3

1 Graduate School of Engineering, University of Hyogo, Himeji, Japan
kukim@ieee.org
2 Technology and Intellectual Property H.Q., Omron Corporation, Kizugawa, Japan
3 WPI Immunology Frontier Research Center, Osaka University, Osaka, Japan

Abstract. This paper proposes a multi human movement trajectory (HMT) extraction system with room layout estimation by a thermal sensor. In the system, the sensor is attached to the ceiling, and it acquires 16 × 16 elements of spatial temperature: a thermal distribution. The distributions are analyzed to extract HMTs. Firstly, the room temperature is removed from the thermal distribution. Secondly, the human distribution is estimated with fuzzy inference. In this procedure, an O-F (Object-Floor) map is employed to prevent misdetection of human positions, based on the room layout. Finally, multi HMTs are extracted by Connected Component Labeling and by ordering based on the distance between newly acquired human positions and past HMTs. In the experiment, we measured a room to evaluate the detection ability of our system. As the experimental result, the system successfully extracted multi HMTs in all the data.

Keywords: Daily monitoring system, Thermal sensor, Thermal distribution, Multi-human location, Human detection, Human movement trajectory.

1 Introduction

In Japan, the number of elderly people living alone is increasing: there were 3.87 million such households in 2005, and the number is projected to reach 7.17 million in 2030 [1]. Thus, unnoticed lonely death is becoming a serious problem. Its major cause is said to be isolation from any community or relationship. It also arose from mental diseases such as PTSD in temporary housing after the Hanshin disaster in Japan [2], [3]. To prevent these issues, long-term care and assistance are needed. One solution is for staff members of special institutes and for volunteers to visit regularly. However, this work is limited by the number of staff, and the visiting time is limited as well. For these reasons, daily monitoring systems are needed.
In indoor monitoring systems, optical cameras and depth sensors are generally employed [4], [5]. For example, those sensors capture human motion and posture. However, they invade privacy, because they have enough resolution to capture faces and movement [6], [7]. In addition, the presence of those sensors also causes psychological effects, because residents feel that their daily life is being watched and


know that the sensors capture their faces in detail. Thus, those sensors are not suitable for daily monitoring. In this study, we employ an infrared thermal sensor. It is a passive infrared sensor, which detects objects by receiving their emitted infrared rays. This sensor is able to measure a wide space simultaneously and is specialized for detecting heat sources [8], [9]. Because the sensor has a low resolution, such as 4×4 or 8×8, it cannot acquire human faces or behavior. Therefore, it does not constrain residents mentally. For this reason, the sensor solves the privacy problem and does not cause psychological effects. With this sensor, human positions are extracted as high-temperature areas. Thus, this sensor is suitable for human detection.
In our previous work [10], [11], we employed a thermal sensor and constructed a system which estimates multi human positions. Our previous system detected humans by fuzzy inference and calculated human positions by Connected Component Labeling. In the experiment, we employed several subjects in a room with objects and heat sources, such as a large table, chairs, and a heater. The system successfully recorded their positions accurately and estimated the number of subjects. However, that method is not able to extract HMTs (human movement trajectories), because it does not associate humans across positions. To detect abnormal movement, daily HMTs are useful and efficient. Moreover, for elderly people living alone, their family, acquaintances and others often visit them. Therefore, a system which extracts multi HMTs is needed.
This paper proposes a multi human movement trajectory (HMT) extraction method for a room with objects and heat sources using a 16 × 16 thermal sensor. In our system, the sensor is attached to the ceiling and acquires thermal distributions regularly. A thermal distribution is defined as two-dimensional temperature values. In our method, the system calculates a thermal difference distribution (TDD) by removing the room temperature from the distribution. Next, the system estimates humans from the TDD by fuzzy inference with the O-F map, which holds room layout information. Here, the O-F map is calculated in preprocessing from TDDs. Then, human positions are calculated by Connected Component Labeling. Finally, multi HMTs are extracted by associating the calculated human positions with past HMTs by minimizing their distance. In the experiment, we measured a room to evaluate the detection ability of our system. As the result, the system extracted multi HMTs successfully.

2 Preliminaries

In this section, we describe the measurement system. We employ the 16 × 16 infrared array sensor module (Omron Corporation), which measures spatial temperature. This sensor outputs temperatures from 273 [K] to 323 [K] with 16-bit gradations. The sensor acquires a thermal distribution, which consists of two-dimensional 16 × 16 temperature values. This sensor is shown in figure 1. Figure 2(left) shows an example of a thermal distribution from the sensor, and figure 2(right) shows an optical image captured by a camera at the same time. In our system, the sensor is attached to the ceiling as shown in figure 3(left). For example, when the sensor is attached at a 3.0 [m] height, the sensing area becomes a 2.6 [m²] horizontal area at a height of 1.7 [m], as shown in figure 3(right). Here, 1.7 [m] is based on the typical human height in Japan [12].
In our system, the sensor is attached to the ceiling, and thermal distributions are then provided to a personal computer to extract multi human movement trajectories.
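As an illustration of the sensor's output format, a raw 16-bit reading could be decoded as follows, assuming the code range maps linearly onto the stated 273-323 [K] span; this mapping is our assumption, and the actual register format of the Omron module may differ.

```python
# Hypothetical linear decoding of a 16-bit sensor code to kelvin,
# assuming 0 -> 273 [K] and 65535 -> 323 [K].

def raw_to_kelvin(raw):
    return 273.0 + (raw / 65535.0) * (323.0 - 273.0)
```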
Fig. 1. The thermal sensor (dimensions: 37, 20, and 11 [mm])

Fig. 2. An example of a thermal distribution (left; temperature scale 295-303 [K]) and the corresponding optical image (right)

Fig. 3. The sensing area of the system in side view (left) and top view (right)

3 Proposed Method

Our approach is shown in figure 4. Firstly, the sensor acquires a thermal distribution. Secondly, the system removes the room temperature from the thermal distribution, as shown in figure 5. Thirdly, the system extracts human distributions from the thermal difference distributions by fuzzy inference and the O-F map [13]. Here, the O-F (Object-Floor) map is calculated in preprocessing. Fourthly, human positions are calculated by Connected Component Labeling. Finally, human movement trajectories (HMTs) are extracted from the human positions by associating the positions with past HMTs by minimizing their distance.
Fig. 4. The approach: in learning (preprocessing over Nl samples), thermal distributions are converted to thermal difference distributions and classified by fuzzy inference into the O-F map; in measurement, thermal difference distributions are combined with the O-F map by fuzzy inference to obtain human distributions, from which HMTs are extracted

3.1 Room Temperature Removal

The thermal distribution varies with daily temperature variation, sunshine, and cooling/heating facilities. In the sensor, the temperature in the distribution is the sum of the environment temperature and the temperature radiated by objects. The environment temperature corresponds to the room temperature and is affected by the factors above. Thus, to remove the effect of room temperature, the system subtracts the room temperature from the distribution to make a thermal difference distribution (TDD) Tt(x, y). Here, the notation t denotes the time of the data, and x and y denote the element indices of the x and y axes shown in figure 2(left). The TDD is a thermal distribution relative to room temperature, as shown in figure 5. The room temperature can be approximated by the temperature of the floor, which is close to the minimum temperature of the distribution; that minimum is therefore a candidate for the room temperature. However, the sensor sometimes acquires outliers. Therefore, to remove outliers, we employ the fifth-percentile value of the distribution as the room temperature.
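The room-temperature removal can be sketched as follows; clipping negative differences to zero is our assumption for presentation (the scale in figure 5 is nonnegative).

```python
import numpy as np

# Sketch of Section 3.1: the 5th-percentile value of the 16x16 thermal
# distribution approximates the floor (room) temperature and is subtracted
# to obtain the thermal difference distribution (TDD).

def thermal_difference(frame):
    """frame: 16x16 array of temperatures in kelvin -> TDD in kelvin."""
    room = np.percentile(frame, 5)        # outlier-robust floor estimate
    return np.maximum(frame - room, 0.0)  # clip negatives (our assumption)
```

Using a percentile instead of the minimum keeps a single cold outlier pixel from dragging the room-temperature estimate down.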

Fig. 5. The TDD for figure 2(left) (differential temperature scale 0.0-7.5 [K])



3.2 Fuzzy Human Detection

In this procedure, the system calculates the human distribution from the TDD by fuzzy inference. Our conventional method sometimes misdetected a heat source as a human. For example, when a human left his or her seat, the seat was detected as a human, because the heat emitted by the person makes the seat hot [10], [11]. Thus, we employ knowledge about temperature and human movement to distinguish floor, object, heat source and human.
Knowledge 1: Heat source, human, object and floor each have a unique temperature level.
Knowledge 2: A human has width and depth.
Knowledge 3: When a human comes in, the temperature increases.
Knowledge 4: When a human goes out, the temperature decreases.
Knowledge 5: Human positions in a time series are continuous.
From this knowledge, the following fuzzy IF-THEN rules are derived.
Rule 1: IF temperature T is close to TPm, THEN fuzzy degree μPT is high.
Rule 2: IF temperature T is larger than THh, THEN fuzzy degree μHT is high.
Rule 3: IF temperature T is close to TOm, THEN fuzzy degree μOT is high.
Rule 4: IF temperature T is smaller than TFh, THEN fuzzy degree μFT is high.
Rule 5: IF the mean absolute error eT against the Gaussian distribution G(x, y) is smaller than eTl, THEN fuzzy degree μWIDTH is high.
Rule 6: IF the differential temperature ΔT is larger than ΔTIh, THEN fuzzy degree μIN is high.
Rule 7: IF the differential temperature ΔT is smaller than ΔTOl, THEN fuzzy degree μOUT is high.
Here, the notation T denotes the temperature value of each TDD element, and ΔT denotes the mean difference of T calculated from the previous 3 samples (samples t-2, t-1 and t). The mean absolute error eT and the Gaussian distribution G(x, y) are calculated by (1) and (2), respectively:

$$e_T(x, y) = \frac{1}{W_H^2} \sum_{h=-W_H/2}^{W_H/2} \; \sum_{v=-W_H/2}^{W_H/2} \left| \frac{T(x+h,\, y+v)}{T(x,\, y)} - G(h, v) \right| \qquad (1)$$

$$G(x, y) = \exp\!\left( -\frac{x^2 + y^2}{2\,(W_H/4)^2} \right) \qquad (2)$$
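Eqs. (1)-(2) of this section can be sketched as follows, assuming an even human width W_H so that the summation window is symmetric, and indexing the temperature grid as T[y][x]; both are illustrative conventions.

```python
import math

# Sketch of the Gaussian template G (Eq. (2)) and the mean absolute error
# e_T (Eq. (1)) used by Rule 5 to test whether a hot region is human-sized.

def gaussian_template(x, y, w_h):
    """G(x, y) = exp(-(x^2 + y^2) / (2 * (w_h / 4)^2))."""
    return math.exp(-(x * x + y * y) / (2.0 * (w_h / 4.0) ** 2))

def mean_abs_error(T, x, y, w_h):
    """e_T(x, y): deviation of the normalised patch around (x, y) from G."""
    half = w_h // 2
    total = 0.0
    for h in range(-half, half + 1):
        for v in range(-half, half + 1):
            total += abs(T[y + v][x + h] / T[y][x] - gaussian_template(h, v, w_h))
    return total / (w_h * w_h)
```

A patch that is itself Gaussian-shaped around (x, y) yields an error near zero, so μWIDTH is high; a flat heat blob yields a large error.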

In (1), WH denotes the width of the distribution corresponding to the human width. μPT denotes the fuzzy degree that the element is a human, μHT that it is a heat source, μOT that it is an object, and μFT that it is floor. μIN denotes the fuzzy degree that the element is a human coming in, and μOUT that it is a human going out.
For rules 1 to 4, the fuzzy membership functions PERSON, HEAT, OBJECT and FLOOR are defined as in figure 6(a). For rule 5, the fuzzy membership function WIDTH is defined as in figure 6(b). For rules 6 and 7, the fuzzy membership functions IN and OUT
are defined as in figure 6(c). Each degree concerning a human is calculated by the equations below:

$$\mu_{PT} = \min\big( S_{T_x}(T),\ PERSON \big) \qquad (3)$$

$$\mu_{WIDTH} = \min\big( S_{e_T x}(e_T),\ WIDTH \big) \qquad (4)$$

$$\mu_{IN} = \min\big( S_{\Delta T_x}(\Delta T),\ IN \big) \qquad (5)$$

$$\mu_{OUT} = \min\big( S_{\Delta T_x}(\Delta T),\ OUT \big) \qquad (6)$$

Here, the fuzzy singleton function $S_\alpha(\beta)$ is defined by (7):

$$S_\alpha(\beta) = \begin{cases} 1 & \text{if } \beta = \alpha \\ 0 & \text{otherwise} \end{cases} \qquad (7)$$

The likelihood $L_t^P(x, y)$ that an element is a human is calculated by (8):

$$L_t^P(x, y) = \mu_{PT}\,\big(1 - \mu_{OUT}\big)\,\big( \mu_{IN} + \mu_{WIDTH} + L_{t-1}^P(x, y) \big) \qquad (8)$$

Here, the initial value of L_P^t(x, y) is set to 0. In addition, most noise consists of a
single element; to remove such components, the system applies a 3 × 3 median filter to
L_P^t. From L_P^t(x, y) and the O-F Map OF^t(x, y), the system calculates the human
distribution HD^t(x, y). Here, the O-F Map represents the room layout with objects as
'O' and floor as 'F'; its details are described in the next section. If OF^t(x, y) = 'O',
HD^t(x, y) becomes 0. Otherwise, HD^t(x, y) is set to either 0 (background) or
1 (human) by (9).

HD^t(x, y) = \begin{cases} 1 & \text{if } L_P^t(x, y) > 0 \\ 0 & \text{otherwise} \end{cases}    (9)
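The per-element likelihood update of (8) and the thresholding of (9) can be sketched as follows (a minimal sketch; the function names `update_likelihood` and `human_distribution` are our own):

```python
import numpy as np

def update_likelihood(lp_prev, mu_pt, mu_in, mu_out, mu_width):
    """One step of (8): LP^t = mu_PT * (1 - mu_OUT) * (mu_IN + mu_WIDTH + LP^{t-1}).
    All arguments are arrays over the sensor elements."""
    return mu_pt * (1.0 - mu_out) * (mu_in + mu_width + lp_prev)

def human_distribution(lp, of_map):
    """HD^t of (9): 0 wherever the O-F map marks an object ('O'),
    otherwise 1 where the likelihood is positive."""
    hd = (lp > 0).astype(int)
    hd[of_map == "O"] = 0
    return hd
```

In the full system a 3 × 3 median filter would be applied to the likelihood before thresholding, which this sketch omits.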

Fig. 6. Fuzzy membership functions for human detection: (a) rules 1-4: temperature; (b) rule 5: mean absolute error of T; (c) rules 6 and 7: differential temperature


Multi Human Movement Trajectory Extraction by Thermal Sensor 41

3.3 Room Layout Estimation

As a preprocess, the system calculates the O-F (Object-Floor) Map, which represents
the room layout with objects as 'O' and floor as 'F'. The O-F Map is employed to
prevent objects such as tables and bookshelves from being misdetected as human
areas. The O-F Map is determined by fuzzy inference. Firstly, the fuzzy degree μPT
and the degrees for the other kinds of element are calculated by the equations below.

\mu_{HT} = \min\left( S_{T_x}(T),\, HEAT \right)    (10)

\mu_{OT} = \min\left( S_{T_x}(T),\, OBJECT \right)    (11)

\mu_{FT} = \min\left( S_{T_x}(T),\, FLOOR \right)    (12)

Next, the O-F Map OF^t(x, y) in sample t is calculated by (13).

OF^t(x, y) = \begin{cases} \text{'X'} & \text{if } \mu_{PT}(x, y) > \mu_{OT}(x, y) \\ \text{'O'} & \text{if } \mu_{OT}(x, y) \ge \mu_{FT}(x, y) \ \lor \ \mu_{HT}(x, y) > \mu_{PT}(x, y) \\ \text{'F'} & \text{otherwise} \end{cases}    (13)

Then, if OF^t(x, y) = 'X', the element is set to the majority of 'O' and 'F' among its
surrounding elements. Finally, the O-F Map OF(x, y) is determined as the mode value
over the N_l learning samples.
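The classification and the neighbourhood resolution above can be sketched as follows (a sketch under assumptions: the connective in (13) is partly garbled in the source, so `classify_element` encodes one plausible reading, and both function names are our own):

```python
from collections import Counter

def classify_element(mu_pt, mu_ht, mu_ot, mu_ft):
    """One reading of (13) for a single element: 'X' marks an ambiguous,
    human-like element, to be resolved later by its neighbours."""
    if mu_pt > mu_ot:
        return "X"
    if mu_ot >= mu_ft or mu_ht > mu_pt:
        return "O"
    return "F"

def resolve_x(of_map):
    """Replace every 'X' with the majority of 'O'/'F' among its 8 neighbours,
    as in the step following (13)."""
    h, w = len(of_map), len(of_map[0])
    out = [row[:] for row in of_map]
    for i in range(h):
        for j in range(w):
            if of_map[i][j] != "X":
                continue
            votes = Counter()
            for di in (-1, 0, 1):
                for dj in (-1, 0, 1):
                    ni, nj = i + di, j + dj
                    if (di or dj) and 0 <= ni < h and 0 <= nj < w:
                        if of_map[ni][nj] in ("O", "F"):
                            votes[of_map[ni][nj]] += 1
            out[i][j] = votes.most_common(1)[0][0] if votes else "F"
    return out
```

The final O-F Map would then be the per-element mode of such maps over the learning samples.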

3.4 Human Position Extraction

In this procedure, the system applies Connected Component Labeling to the human
distribution and calculates the center of gravity (COG) of each label to extract human
positions. Firstly, we apply 8-neighborhood Connected Component Labeling to the
human distribution. In this procedure, each 8-connected component is defined as a label,
and labels carry unique natural numbers to distinguish them from each other. Each label
number 1, 2, ..., k corresponds to a human; label number 0 represents the background.
Next, COG of each label is calculated by (14).
x_c = \frac{1}{N_L} \sum_{i=0}^{N_L - 1} x_i, \quad y_c = \frac{1}{N_L} \sum_{i=0}^{N_L - 1} y_i    (14)

Here, xc and yc denote the 2-dimensional COG of a label, xi and yi denote each pixel
position, and NL denotes the number of pixels constituting the label.
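The labelling and the centroid computation of (14) can be sketched together (a minimal sketch; the function names `label_8` and `centroids` are our own, and a stack-based flood fill stands in for whatever labelling implementation the system actually uses):

```python
import numpy as np

def label_8(hd):
    """8-neighbourhood connected-component labelling of a binary human
    distribution; labels are 1, 2, ..., k and 0 stays background."""
    hd = np.asarray(hd)
    labels = np.zeros(hd.shape, dtype=int)
    next_label = 0
    for start in zip(*np.nonzero(hd)):
        if labels[start]:
            continue
        next_label += 1
        stack = [start]
        labels[start] = next_label
        while stack:
            i, j = stack.pop()
            for di in (-1, 0, 1):
                for dj in (-1, 0, 1):
                    ni, nj = i + di, j + dj
                    if (0 <= ni < hd.shape[0] and 0 <= nj < hd.shape[1]
                            and hd[ni, nj] and not labels[ni, nj]):
                        labels[ni, nj] = next_label
                        stack.append((ni, nj))
    return labels, next_label

def centroids(labels, k):
    """Centre of gravity (row, col) of each label, as in (14)."""
    return [tuple(np.mean(np.nonzero(labels == m), axis=1))
            for m in range(1, k + 1)]
```

Each returned centroid is one candidate human position passed on to the trajectory-extraction step.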

3.5 Multi Human Movement Trajectories Extraction

In this procedure, the system extracts multi human movement trajectories (HMTs)
from the COGs of the labels and the past HMTs. In this study, an HMT is defined as
the trajectory of a label centroid, recorded at regular intervals. A human movement
trajectory is drawn as a polygonal line in time series, and the current position of an
HMT is displayed by a circle.

A. Calculating Distance
The system calculates the distance between each extracted position and the past HMTs. Then,
a. When the human is stationary (the determinant of the HMT covariance matrix is 0), the
system calculates the Euclidean distance between the position and the COG of
the HMT.
b. Otherwise, the human is in most cases walking straight, so the system
calculates the Mahalanobis distance between the position and the HMT.
Thus, the distance Dij between the i-th extracted position and the j-th past HMT is
calculated as below.


D_{ij} = \begin{cases} \left( P_{pi} - \mu_j \right)^{T} \Sigma_j^{-1} \left( P_{pi} - \mu_j \right) & \text{if } \det \Sigma_j \neq 0 \\ \left\| P_{pi} - P_{oj} \right\| & \text{otherwise} \end{cases}    (15)



In (15), Ppi = (xpi, ypi) denotes the i-th extracted position and Poj = (xoj1 ... xojM,
yoj1 ... yojM) denotes the j-th past HMT. μj and Σj denote the mean vector and the
covariance matrix of Poj, respectively. Figure 7 shows the concept of this step. In this
figure, the polygonal lines with points represent HMTs, black points represent new
positions, thick lines represent the distances, and thin black lines show the normal
distribution given by the mean vector and covariance matrix. The thickness of the
thick lines is determined by ascending distance order. Black crosses represent the
mean vector of each HMT.

Fig. 7. Start (left) and calculating distances between positions and past HMTs (right)
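The case split of (15) can be sketched as follows (a minimal sketch; the function name `trajectory_distance` is our own, and note the first branch is the squared Mahalanobis distance, as written in (15)):

```python
import numpy as np

def trajectory_distance(p, hmt):
    """Distance of (15) between a new position p and a past HMT, given as an
    (M, 2) array of recorded positions: squared Mahalanobis distance when the
    covariance is invertible, Euclidean distance to the HMT's COG otherwise."""
    hmt = np.asarray(hmt, dtype=float)
    mu = hmt.mean(axis=0)
    d = np.asarray(p, dtype=float) - mu
    cov = np.cov(hmt.T) if len(hmt) > 1 else np.zeros((2, 2))
    if np.linalg.det(cov) != 0:
        # walking human: Mahalanobis distance along the motion direction
        return float(d @ np.linalg.inv(cov) @ d)
    # stationary human: covariance is singular, fall back to Euclidean
    return float(np.linalg.norm(d))
```

A stationary human produces a zero covariance matrix, so the Euclidean branch is taken automatically.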

B. Attributing New Positions to HMTs by the Distance

The system associates the extracted human positions to the past HMTs by minimizing
the distances. Figure 8 shows the concept of this step. When multiple positions have
their minimum distance to the same HMT, they would all be associated to that HMT;
the system then selects the position with the smaller distance.
Next, if the number of positions is larger than the number of past HMTs, the extra
positions become new HMTs. Otherwise, the extra past HMTs become not available
(N/A). Here, the moving velocity of a human is limited, and therefore humans do not
change their positions dramatically. Thus, if the distance D is larger than the threshold
Dmax, the position is associated to a new HMT; otherwise, it is associated to the
nearest HMT. Here, Dmax is calculated by (16).

D_{max} = \frac{v_{ave}}{F_s} \;\; [\text{elements}]    (16)

In (16), Dmax denotes the maximum distance a human is able to move in unit time,
vave denotes the average velocity of a human, and Fs denotes the sampling rate of the
sensor.
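The association step with the D_max threshold of (16) can be sketched as a greedy nearest-trajectory assignment (a simplification of the full procedure, which also resolves conflicts between positions competing for the same HMT; the function name `associate` is our own):

```python
def associate(positions, hmts, distance, v_ave, fs):
    """Assign each extracted position to its nearest past HMT, unless the
    distance exceeds D_max = v_ave / F_s of (16), in which case the position
    starts a new HMT.  `distance` is a callable (position, hmt) -> float."""
    d_max = v_ave / fs
    assignments = []
    for p in positions:
        if not hmts:
            assignments.append(None)
            continue
        j = min(range(len(hmts)), key=lambda k: distance(p, hmts[k]))
        assignments.append(j if distance(p, hmts[j]) <= d_max else None)
    return assignments  # None means "start a new HMT"
```

With the experimental settings in Section 4.1 (6.0 elements/sec, a 1.0 Hz sampling rate), D_max would be 6 elements.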

Fig. 8. Distance minimization (left) and end (right)

4 Experiment

4.1 Experimental Protocol

In our experiment, we evaluated the multi-HMT extraction accuracy of the system in
a room, employing 12 movement data sets. The experimental scene is shown in
figure 9. The sensor was attached at a height of 2.7 [m]; therefore, the measuring area
becomes 2.0 × 2.0 [m]. The sampling rate of the sensor was 1.0 Hz. Based on past
experiments, TFl, TFh, TOm, TOv, TPm, TPv, THl and THh were set to 1.5 [K], 3.0 [K],
2.3 [K], 0.4 [K], 4.0 [K], 1.2 [K], 3.0 [K] and 5.0 [K], respectively. In the same way,
ΔTOl, ΔTOh, ΔTHl and ΔTHh were set to -0.4 [K], -0.2 [K], 1.0 [K] and 1.5 [K],
respectively. eTl and eTh were set to 0.10 and 0.13, respectively. WH was set to 3 and
Nl to 300 samples. vmax was set to 6.0 [elements/sec].
The layout of the room is shown in figure 10. There is a broad path in the center of
the measuring area, and two narrow paths on both sides of the broad path. At the
edges of the room there are tables and a white board, which prevent humans from
passing. There are several PC displays on each table.

(Figure annotations: Heater, Printer, PC Display; area 2.0 m × 2.0 m; legend: heat source, objects, floor)

Fig. 9. Set-up scene
Fig. 10. The room layout of the measurement area

4.2 Noise Reduction

In the measurement, we confirmed that the thermal distributions contain diagonal
noise, as shown in figure 11. This noise varied in every sample, as shown in
figure 11 (top). In previous work, we applied 5-sample smoothing to remove the
noise; however, this also removed the human moving component of the thermal
distribution [11].
Thus, in this experiment, we applied the Fourier Transform to the thermal
distribution to remove the noise component. Here, the spatial frequency domain is
calculated from the one-dimensioned thermal distribution. Figure 12 (left) shows the
frequency domain of the raw distribution. From the figure, the domain has peaks near
15 [Hz], 60 [Hz] and 105 [Hz]. In a past experiment, the noise was not observed in a
room without an electric supply line on the ceiling [10]; therefore, this noise would be
caused by the line shown in figure 9. Thus, the system sets the peaks to zero to
remove the noise, obtaining the de-noised distribution shown in figure 11 (bottom).
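The peak-zeroing step can be sketched as follows (a minimal sketch; the function name `remove_peaks` and the treatment of the mirror bins are our assumptions about the implementation):

```python
import numpy as np

def remove_peaks(signal, peak_bins):
    """De-noising sketch: zero the given frequency bins of the one-dimensioned
    thermal distribution (and their mirror bins, so the inverse transform
    stays real), then transform back."""
    spec = np.fft.fft(signal)
    n = len(signal)
    for b in peak_bins:
        spec[b] = 0.0
        spec[(n - b) % n] = 0.0  # conjugate-symmetric partner bin
    return np.fft.ifft(spec).real
```

If the noise sits exactly on the zeroed bins, it is removed without touching the rest of the signal.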

(Panel annotations: Raw and Denoised distributions, 295-305 K scale; horizontal axis: Time [sample])

Fig. 11. Raw thermal distribution (top) and the de-noised distribution (bottom)

(Axes: PSD versus Frequency [Hz])

Fig. 12. Frequency domain of the noise (left) and the denoised one (right)

4.3 Numerical Evaluation

We evaluate the extraction accuracy from the correspondence of extracted HMTs and
ground truth. In this experiment, the indices GOOD, OVER and MISS are employed.
GOOD becomes high if the system extracts multi HMTs exactly; on the other hand,
OVER and MISS become high if the system fails to extract them. These indices are
calculated by (17).
GOOD = \frac{1}{N_s} \sum_{t=1}^{N_s} \frac{N_t^{nm}}{N_t^{l}} \times 100 \; [\%],
\quad
OVER = \frac{1}{N_s} \sum_{t=1}^{N_s} \frac{N_t^{n\bar{m}}}{N_t^{l}} \times 100 \; [\%],
\quad
MISS = \frac{1}{N_s} \sum_{t=1}^{N_s} \frac{N_t^{\bar{n}m}}{N_t^{l}} \times 100 \; [\%]    (17)
Here, n denotes the index of an extracted HMT Ppn and m denotes the index of a
ground truth Pom. N_t^{nm} denotes the number of samples in which the n-th
extracted HMT Ppn is successfully associated to the m-th ground truth Pom,
N_t^{n\bar{m}} denotes the number of samples in which the n-th extracted HMT Ppn
is not associated to the m-th ground truth Pom, and N_t^{\bar{n}m} denotes the
number of samples in which the m-th ground truth Pom is not associated to any
extracted HMT. N_t^{l} denotes the number of labels in the t-th sample, and it
satisfies (18).

N_t^{l} = N_t^{nm} + N_t^{n\bar{m}} + N_t^{\bar{n}m}    (18)

In these indices, whether an extracted HMT is associated to a ground truth is
determined by the mean distance between the extracted HMT and the ground truth
over all samples. If the distance between the n-th extracted HMT Ppn and the m-th
ground truth Pom is the minimum, the combination of n and m is established.
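The three indices of (17), with the per-sample constraint of (18), can be sketched as follows (a minimal sketch; the function name `evaluation_indices` and the per-sample count tuples are our own packaging of the counts):

```python
def evaluation_indices(per_sample_counts):
    """GOOD / OVER / MISS of (17): `per_sample_counts` is a list of
    (n_good, n_over, n_miss) tuples, one per sample t; their sum per
    sample is N_t^l, as required by (18)."""
    ns = len(per_sample_counts)
    good = over = miss = 0.0
    for g, o, m in per_sample_counts:
        nl = g + o + m  # (18): N_t^l = N^{nm} + N^{n m-bar} + N^{n-bar m}
        good += g / nl
        over += o / nl
        miss += m / nl
    return tuple(100.0 * v / ns for v in (good, over, miss))
```

By construction the three indices sum to 100 % over any set of samples.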

5 Experimental Result

In the experiment, the system extracted multi HMTs in the room. The room
temperature was 305.2 [K]. Table 1 and figure 13 show the evaluation indices of the
proposed method. Figures 15 to 17 show the results on processed data. In the optical
images, each human position is annotated with the ground truth of the HMT. In the
extracted-HMT images, light-colored HMTs (colors corresponding with figure 14)
represent extracted HMTs and dark-colored HMTs represent the ground truth.

Table 1. The average results of the experiment

Data No. (samples)   GOOD [%]      OVER [%]      MISS [%]
Data #1 (251)         46.0          32.3          21.6
Data #2 (532)         67.1          28.6           4.4
Data #3 (259)         53.7          43.9           2.5
Data #4 (375)        100.0           0.0           0.0
Data #5 (184)         67.0          27.2           5.8
Data #6 (563)         65.5           9.9          24.6
Data #7 (394)         93.9           3.6           2.5
Data #8 (190)         70.5          21.5           8.0
Data #9 (289)        100.0           0.0           0.0
Data #10 (456)        49.3          41.0           9.8
Data #11 (222)        57.9          37.8           4.3
Data #12 (264)        68.5          29.5           2.0
Average ± SD          69.9 ± 18.7   22.9 ± 15.9    7.1 ± 8.0



From Table 1 and figure 13, except for the data #1, #3 and #9, the system obtained
high GOOD. On the other hand, in data #1, #3 and #9, OVER was high. In all data,
MISS was low. From these results, we confirm that the system successfully extracted
HMTs.

(Bar chart: evaluation rate [%] of GOOD, OVER and MISS per data set, 0-120 % scale)
Fig. 13. Numerical result of proposed method

From figure 15, the system successfully extracted the HMTs of staying humans in
data #2. From figure 16, the system also successfully extracted the HMTs of walking
humans in data #10; however, it extracted a single HMT for two adjoining humans.
On the other hand, from figure 17, the system failed to extract the HMTs of two
crossing humans in data #8 due to mis-association. Overall, although some
mis-extractions were confirmed, we confirmed that the system extracted multi HMTs
successfully.

Fig. 14. Color table of HMTs (Ascending-order)

(Panels: (a) TDD (lines: ground truth of HMT); (b) camera image; (c) extracted HMTs; temperature scale 0-5 K)
Fig. 15. Extraction results in “Data #2”

(Panels: (a) TDD (lines: ground truth of HMT); (b) camera image; (c) extracted HMTs; temperature scale 0-5 K)
Fig. 16. Extraction results in “Data #10”

(Panels: (a) TDD (lines: ground truth of HMT); (b) camera image; (c) extracted HMTs; temperature scale 0-5 K)
Fig. 17. Extraction results in “Data #8”

6 Discussions

From figure 17, the system failed to extract the HMTs of crossing humans. In this
data, the humans crossed each other while adjoining and curving. Therefore, it is
considered that the Mahalanobis distance became smaller for a different HMT than
for the ground truth. To solve this problem, the system needs to consider the trend of
movement, such as going straight or curving, from the past HMT.
From figure 16, two humans were extracted as one human. In the same way,
adjoining humans were extracted as one human mainly in data #1, #3 and #9.
Therefore, it is considered that the high MISS of those data is caused by the adjoining
of multiple humans. In daily life, there are many scenes in which humans adjoin, such
as sitting together on a sofa or eating around a table. Therefore, a method to
distinguish them is needed.

7 Conclusions

We have proposed a multi human movement trajectory (HMT) extraction system
with a 16 × 16 element thermal sensor. In our approach, we attached the sensor to
the ceiling to acquire the thermal distribution of the whole room space. The system
detects humans based on fuzzy inference and the O-F map. Next, the system extracts
multi HMTs by Connected Component Labeling and distance minimization between
past HMTs and new human positions. Finally, the HMTs are represented by
polygonal lines. In the experiment, we measured a room; as a result, the system
successfully extracted multi human movement trajectories. Thus, our system using
fuzzy inference is suitable for extracting multi human movement trajectories in daily
home environments, such as a living room or a bedroom.
In our future work, we will estimate human postures such as standing, sitting and
lying in order to detect abnormal movement. In addition, we will distinguish
adjoining humans.

References
1. Nishioka, H., Koyama, Y., Suzuki, T., Yamauchi, M., Suga, K.: Household Projections by
Prefecture in Japan: 2005-2030 Outline of Results and Methods. The Japanese Journal of
Population 9(1) (2011)
2. Salcioglu, E., Basoglu, M., Livanou, M.: Long-Term Psychological Outcome for Non-
Treatment-Seeking Earthquake Survivors in Turkey. The Journal of Nervous and Mental
Disease 191, 154–160 (2003)
3. Naotaka, S.: Disaster mental health: lessons learned from the Hanshin Awaji earthquake.
World Psychiatry 1(3), 158–159 (2002)
4. Toshiyo, T., Atsushi, K., Masayuki, N., Akira, T., Kazuo, S., Kenichi, Y.: E-Healthcare at
an Experimental Welfare Techno House in Japan. Open Med. Inform. 1, 1–7 (2007)
5. Abrams, D.B.: Toward a Model for Collaborative Gerontechnology: Connecting Elders
and their Caregivers. In: Sixth International Conference on Creating, Connecting and
Collaborating through Computing, C5, pp. 109–114 (2008)
6. Segen, J.: A camera-based system for tracking people in real time. In: Proc. of the 13th
International Conference on Pattern Recognition (1996)
7. Kanazawa, S., Taniguchi, K., Kazunari, A., Kuramoto, K., Kobashi, S., Hata, Y.: A fuzzy
automated object classification by infrared laser camera. In: Proc. of SPIE Defence,
Security and Sensing 2011, pp. 805815-1-9 (2011)
8. Foote, C.M., Kenyon, M., Krueger, R.T., McCann, A.T., Chacon, R., Jones, W.E., Dickie,
R.M., Schofield, T.J., McCleese, J.D.: Thermopile Detector Arrays for Space Science
Applications. In: Proc. of SPIE, vol. 4999, pp. 443–447 (2003)
9. Herwaarden, V.W.A., Sarro, M.P.: Thermal Sensors Based on Seebeck Effect. Sensors and
Actuators 10, 321–346 (1986)
10. Kuki, M., Nakajima, H., Tsuchiya, N., Hata, Y.: Human Movement Trajectory Recording
for Home Alone by Thermopile Array Sensor. In: Proc. of 2012 IEEE Int. Conf. on
Systems, Man and Cybernetics, pp. 2042–2047 (2012)
11. Kuki, M., Nakajima, H., Tsuchiya, N., Tanaka, J., Hata, Y.: Multi-Human Locating in Real
Environment by Thermal Sensor. In: Proc. of 2013 IEEE Int. Conf. on Systems, Man and
Cybernetics (2013) (accepted)
12. Ministry of Education, Culture, Sports, Science and Technology-Japan: Statistical Abstract
2006 edition 3 Physical Education and Sports (2006),
http://www.mext.go.jp/english/statistics/1302984.htm
13. Zadeh, A.L.: Fuzzy Sets and Applications. John Wiley and Sons, New York (1987)
An Energy Visualization by Camera Monitoring

Tetsuya Fujisawa¹, Tadahito Egawa², Kazuhiko Taniguchi²,
Syoji Kobashi¹,³, and Yutaka Hata¹,³

¹ Graduate School of Engineering, University of Hyogo, Hyogo, Japan
² Kinden Corporation, Kyoto, Japan
³ WPI Immunology Frontier Research Center, Osaka University, Osaka, Japan
fujisawa_t@ieee.org

Abstract. This paper proposes an energy visualization system using a camera. For
monitoring, a single camera captures gas meter images at fixed intervals. The
system applies edge detection and connected-component labeling to extract the
numeral regions in the counter of a gas meter. Gas consumption is estimated based
on shape characteristics of the numerals. The system uses the number of endpoints
and holes in each numerical character, and it calculates a direction histogram and
the sum of absolute differences (SAD). The system recognizes each numeral by fuzzy
inference from the acquired shape characteristics. When the system fails to
recognize the gas consumption due to some accident, the consumption is interpolated
from time-series data. As a result, our method correctly estimated 32 and 29 of
33 numerals for front and slant measurement, respectively. For continual
monitoring over a day, the system successfully estimated the dynamic gas
consumption change and visualized it.

Keywords: energy visualization, image processing, numeral recognition, fuzzy
inference, gas consumption.

1 Introduction

Energy consumption is increasing in Japan [1]. Energy is consumed in many
sectors, such as the industrial sector, the private local sector and the transportation
sector. The private local sector has shown an increasing trend, owing to changes in
our lifestyle and an increase in the number of households [2]. Home energy is mainly
consumed as electricity and gas. As a solution to save these energies, energy
visualization is an effective candidate. Energy visualization systems display energy
consumption as time-series graphs; in addition, they make it possible to know the
tendency of the consumption [3]. Electricity consumption is easily measured by
dedicated equipment; however, the measurement methods for gas are not so practical.
For gas consumption measurement, image processing plays a primary role.
Optical character recognition (OCR) and template matching [4][5] are representative
techniques for recognizing numerals from images. These methods recognize numerals
by matching captured images with letter patterns. However, in OCR the installation
location of the capture device is limited by the environment. In addition, in

Y.S. Kim et al. (eds.), Advanced Intelligent Systems, 51


Advances in Intelligent Systems and Computing 268,
DOI: 10.1007/978-3-319-05500-8_6, © Springer International Publishing Switzerland 2014

the template matching method, when the capture angle or the size of the captured
image differs from that of the template image, the recognition performance decreases.
Moreover, we may fail to capture the numerals owing to accidents such as occlusion
by insects, humans or rain. In Japan, gas meters mostly employ a mechanical counter
to show the gas consumption; while the counter is rolling, we cannot read the exact
numeral.
In this study, we propose a gas consumption visualization system based on image
processing. A merit of this system is that its cost is lower than that of a smart meter,
because the system uses an existing gas meter. The system estimates the gas
consumption using numeral characteristics. We employ the number of endpoints, the
number of holes and the direction histogram as characteristics independent of the
capture angle. Moreover, we employ the sum of absolute differences between
captured and template numeral images for the estimation. From these characteristics,
we estimate the gas consumption based on fuzzy logic. In order to reduce the
influence of misrecognition and occlusion, the system interpolates the data with the
median value of the previous and subsequent recognized data. In the experiment, we
performed an evaluation of the system and then estimated the dynamic change of
consumption over a day. As a result, the system succeeded in estimating gas
consumption from images at various angles and visualized the dynamic change.

2 The System Constitution


This section describes our system constitution. Figure 1 shows an outline of the
system, which consists of a camera and a personal computer. The camera acquires
gas meter images at fixed intervals. Here, the gas meter images include the numeral
regions, the counter region and the frame of the meter, as shown in Figure 2. The
numeral sizes and fonts are standardized by the Japanese Industrial Standards (JIS) in
Japan, and numeral characters are colored white. Next, the system transmits the
images to the personal computer and estimates the gas consumption by image
processing. Finally, the estimated values are recorded as a comma-separated values
(CSV) file and then visualized as a time-series graph.

(Diagram: gas meter → camera → personal computer; CSV file sample of time and gas consumption [m³]: 12:00 263293, 12:01 263295, 12:02 263296, 12:03 263298, 12:04 263299)
Fig. 1. Outline of our system constitution



(Annotations: numeral regions, counter region, frame)

Fig. 2. An example of gas meter image

3 Proposed Method

Figure 3 shows a flowchart of the proposed method, which consists of counter region
extraction, numeral region extraction, numeral recognition and interpolation. Firstly,
the system extracts the counter region from a gas meter image using edge
information. Secondly, the system extracts the numeral regions by binarization and
connected-component labeling. Thirdly, the gas consumption is estimated by numeral
recognition with fuzzy inference based on shape characteristics of the images.
Finally, the proposed method interpolates the estimated gas consumption using the
previous and subsequent consumptions. For visualization, the system makes a time-
series graph from the interpolated gas consumption.

START → Captured image → Counter region extraction → Numeral region extraction → Numeral recognition → Interpolation → Gas consumption → END

Fig. 3. Flowchart of the proposed method



3.1 Counter Region Extraction

This process extracts the counter region of the gas meter from the captured image.
The counter region of the gas meter is mostly occupied by black area; in addition, the
ratio of its height to its width is about 2 to 5. To extract the counter region, the system
performs edge processing [6] on the captured image and obtains an edge image with a
Canny filter [7]. Figure 4 shows an example of a captured image and its edge image.
Firstly, the system extracts outlines from the edge image; here, an outline is defined
as a very narrow white region. Secondly, the system calculates the width and height
of the circumscribed rectangle of each outline, and the ratio of black pixels is
calculated. Thirdly, the system searches for an outline which satisfies the above
knowledge. Finally, the system extracts the region enclosed by that outline. Figure 5
shows an example of an extracted counter region.

(Annotations: counter region; height 3.0 cm, width 7.5 cm)

Fig. 4. A captured image (left) and edge image (right)

Fig. 5. An extracted counter region

3.2 Numeral Region Extraction


To extract the numeral regions, the system binarizes the counter region and applies
connected-component labeling. The threshold for binarization is automatically
determined by Otsu's method [8]. The left side of Figure 6 shows an example of a
binarized counter region. From Figure 6, each numeral region becomes one connected
region of white intensity; therefore, connected-component labeling extracts the
numeral regions from the binarized counter region. Here, we use the knowledge that
each numeral region has the same size and is located at a similar y-coordinate, so the
system extracts the regions which satisfy this knowledge. The right side of Figure 6
shows the procedure of the numeral region extraction. Firstly, the system finds the
leftmost label in the counter region. Secondly, the system compares the size and
y-coordinate of the found label with those of the labels to its right. If the system finds
similar labels, it extracts them as a numeral region. Figure 7 shows an example of a
numeral region extracted by this procedure.

Fig. 6. An example of binarized counter region (left) and connected-component labeling (right)

Fig. 7. An example of extracted numeral region
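The Otsu step that precedes the labeling can be sketched as follows (a standard between-class-variance implementation of Otsu's method; the function name `otsu_threshold` is our own):

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's method: pick the gray level that maximizes the between-class
    variance of the histogram, used here to binarize the counter region."""
    hist = np.bincount(np.asarray(gray).ravel(), minlength=256).astype(float)
    total = hist.sum()
    sum_all = np.dot(np.arange(256), hist)
    best_t, best_var = 0, -1.0
    w0 = sum0 = 0.0
    for t in range(256):
        w0 += hist[t]            # weight of the dark class (levels <= t)
        if w0 == 0 or w0 == total:
            continue
        sum0 += t * hist[t]
        m0 = sum0 / w0           # dark-class mean
        m1 = (sum_all - sum0) / (total - w0)  # bright-class mean
        var = w0 * (total - w0) * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t
```

Pixels above the returned threshold become the white numeral regions that the connected-component labeling then groups.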

3.3 Numeral Recognition

3.3.1 Shape Characteristic Extraction of the Numeral

The system recognizes each extracted numeral region based on fuzzy inference. The
system extracts the number of endpoints, the number of holes, the similarity of
direction histograms and the similarity between a template image and the thinning
image as numeral characteristics.
Firstly, the system applies thinning to the numeral region to extract the shape
characteristics. Figure 8 shows an example of a thinning image of "6". From the
thinning image, the system extracts and counts endpoints and holes. An endpoint is
defined as a terminal pixel of the thinning image, and a hole is defined as a
background region which is enclosed by the numeral region. These characteristics are
advantageous in that they are not affected by the capture angle. In addition, the
system acquires two similarities between the obtained numeral region and template
images. The template images are created from learning images for each numeral
character. As the first similarity, we employ the Bhattacharyya coefficient Ln(I) [9],
an index which represents the similarity of two histograms. Here, I denotes an
acquired numeral region, and n denotes the index of a numeral character (n = {0, 1, ...,
9}). In our method, the system calculates the Bhattacharyya coefficient from the
direction histogram DH(I) of the extracted numeral region I and DH(Tn) of the
template image Tn for each n. The direction histogram is defined as the histogram of
the chain code of a thinning image. A chain code expresses the connected direction of
the pixels by direction indexes, where a direction index expresses the connection
direction in eight values. Figure 9 shows the direction indexes, an example of a chain
code and an example of a direction histogram. The Bhattacharyya coefficient Ln(I) is
defined by Equation (1).
L_n(I) = \sum_{i=0}^{7} \sqrt{ \frac{DH(I)_i}{\sum_{i=0}^{7} DH(I)_i} \times \frac{DH(T_n)_i}{\sum_{i=0}^{7} DH(T_n)_i} }    (1)

Here, i denotes a direction index, and DH(I)i denotes the frequency of direction i.
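The histogram comparison of (1) can be sketched as follows (a minimal sketch; the function name `bhattacharyya` is our own):

```python
import numpy as np

def bhattacharyya(h1, h2):
    """Bhattacharyya coefficient of (1) between two direction histograms:
    each histogram is normalized, then the square roots of the bin-wise
    products are summed.  Identical shapes give 1.0."""
    p = np.asarray(h1, dtype=float); p /= p.sum()
    q = np.asarray(h2, dtype=float); q /= q.sum()
    return float(np.sum(np.sqrt(p * q)))
```

Because the histograms are normalized first, the coefficient depends only on the histogram shapes, not on the stroke length of the numeral.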
As the second similarity, we employ the sum of absolute differences (SAD) Rn(I).
The SAD represents the difference of pixel values between two images. Here, the
system calculates Rn(I) of the thinning image I and the template image Tn by
Equation (2).
R_n(I) = \sum_{x=0}^{N-1} \sum_{y=0}^{M-1} \left| I(x, y) - T_n(x, y) \right|    (2)

Here, I(x, y) denotes the pixel value of the input image at coordinate (x, y), and
Tn(x, y) denotes the pixel value of the template image. N and M denote the width and
height of the template images, respectively.
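The SAD of (2) can be sketched as follows (a minimal sketch; the function name `sad` is our own, and the images are assumed to be same-sized integer arrays):

```python
import numpy as np

def sad(image, template):
    """Sum of absolute differences R_n(I) of (2) between a thinned numeral
    image and a same-sized template: 0 for identical images, larger for
    more dissimilar ones."""
    return int(np.abs(np.asarray(image, dtype=int)
                      - np.asarray(template, dtype=int)).sum())
```

Unlike the angle-invariant endpoint and hole counts, the SAD is sensitive to misalignment, which is why it is only one of four weighted cues in the fuzzy inference.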

(Annotations: endpoint, hole)

Fig. 8. An example of endpoint and hole

(Direction indexes arranged around the center pixel as 3 2 1 / 4 · 0 / 5 6 7; example chain code {445-434} traced from start point to end point; direction histogram of frequencies over indexes 0-7)

Fig. 9. Direction index (left), example of chain code (right) and direction histogram (bottom)

3.3.2 Numeral Recognition Using Fuzzy Inference

This method recognizes the numeral region by fuzzy inference [10]. At first, the
system calculates the fuzzy degree of each characteristic for every numeral.
Generally, the numbers of endpoints and holes of the same numeral are similar
(Knowledge 1). The Bhattacharyya coefficient L becomes 1 when the input and
template images are the same; thus, Bhattacharyya coefficients of the same numerals
are high (Knowledge 2). The SAD becomes 0 when the input and template images are
the same; thus, the SAD of the same numerals is low (Knowledge 3). From this
knowledge, the following fuzzy IF-THEN rules are derived.
Rule 1: If the number of endpoints NE(I) is CLOSEE,n to the ideal endpoints of
numeral n, THEN fuzzy degree μE,n(I) is high.
Rule 2: If the number of holes NH(I) is CLOSEH,n to the ideal holes of numeral n,
THEN fuzzy degree μH,n(I) is high.
Rule 3: If the Bhattacharyya coefficient Ln(I) is HIGHL, THEN fuzzy degree μL,n(I)
is high.
Rule 4: If the sum of absolute differences Rn(I) is LOWR, THEN fuzzy degree
μR,n(I) is high.
We defined the fuzzy membership functions CLOSEE, CLOSEH, HIGHL and LOWR
of these fuzzy IF-THEN rules as shown in Figure 10. Table 1 shows the ideal
endpoints and holes for each numeral. The fuzzy membership functions of Figure 10
(a) and (b) change with the ideal endpoints and ideal holes of the numeral,
respectively. Fuzzy degree μE,n(I) denotes the similarity of the number of endpoints
between the images, μH,n(I) the similarity of the number of holes, μL,n(I) the
similarity of the direction histograms, and μR,n(I) the similarity of the pixel values.
The minL, maxL, minSAD and maxSAD are set based on experience. Each degree is
calculated by the following equations.

\mu_{E,n}(I) = \min\left( S_{N_a}(N_E(I)),\, CLOSE_{E,n} \right)    (3)

\mu_{H,n}(I) = \min\left( S_{N_a}(N_H(I)),\, CLOSE_{H,n} \right)    (4)

\mu_{L,n}(I) = \min\left( S_{L_a}(L_n(I)),\, HIGH_L \right)    (5)

\mu_{R,n}(I) = \min\left( S_{R_a}(R_n(I)),\, LOW_R \right)    (6)

Here, the fuzzy singleton function S_\alpha(\beta) is defined by (7).

S_\alpha(\beta) = \begin{cases} 1 & \text{if } \beta = \alpha \\ 0 & \text{otherwise} \end{cases}    (7)
The fuzzy degree μn(I) of numeral n is calculated by Equation (8).

\mu_n(I) = w_1 \mu_{E,n}(I) + w_2 \mu_{H,n}(I) + w_3 \mu_{L,n}(I) + w_4 \mu_{R,n}(I)    (8)

Here, w1, w2, w3 and w4 satisfy Equation (9). In the experiment, we set all the
coefficients to 0.25.

w1 + w2 + w3 + w4 = 1 (9)

To recognize a numeral, the system calculates the fuzzy degree μn(I) for all n. Then
the system finds the highest fuzzy degree μk(I) and recognizes the input numeral
region I as the numeral k.
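The weighted combination of (8)-(9) and the final argmax can be sketched as follows (a minimal sketch; the function name `recognize` and the dictionary packaging of the four degrees per candidate numeral are our own):

```python
def recognize(degrees, weights=(0.25, 0.25, 0.25, 0.25)):
    """Sketch of (8): `degrees[n]` holds (mu_E, mu_H, mu_L, mu_R) for
    candidate numeral n; the weights satisfy (9) (they sum to 1, 0.25
    each in the experiment).  Returns the numeral with the highest
    combined degree."""
    scores = {n: sum(w * d for w, d in zip(weights, ds))
              for n, ds in degrees.items()}
    return max(scores, key=scores.get)
```

With equal weights, a numeral only needs to dominate on balance across the four cues, so one unreliable cue (for example SAD under misalignment) does not decide the result by itself.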

(Membership function plots: (a) number of endpoints, with CLOSEE,0, CLOSEE,1, CLOSEE,2 and CLOSEE,4 peaked at 0, 1, 2 and 4; (b) number of holes, with CLOSEH,0, CLOSEH,1 and CLOSEH,2 peaked at 0, 1 and 2; (c) Bhattacharyya coefficient L, with HIGHL rising from minL to maxL; (d) SAD, with LOWR falling from minSAD to maxSAD)
Fig. 10. Fuzzy membership functions for numeral recognition

Table 1. Ideal endpoints and holes for each numeral

Numeral   Ideal endpoints   Ideal holes   CLOSEE,n   CLOSEH,n
0         0                 1             CLOSEE,0   CLOSEH,1
1         2                 0             CLOSEE,2   CLOSEH,0
2         2                 0             CLOSEE,2   CLOSEH,0
3         4                 0             CLOSEE,4   CLOSEH,0
4         4                 0             CLOSEE,4   CLOSEH,0
5         2                 0             CLOSEE,2   CLOSEH,0
6         1                 1             CLOSEE,1   CLOSEH,1
7         2                 0             CLOSEE,2   CLOSEH,0
8         0                 2             CLOSEE,0   CLOSEH,2
9         1                 1             CLOSEE,1   CLOSEH,1

3.4 Missing Value Interpolation

While the counter is rolling, the system is not able to recognize the changing numeral;
moreover, an insect or a human sometimes occludes the counter. Figure 11 shows an
image in which our system failed to recognize a numeral. To solve this problem, the
system checks the size and position of each numeral region. Firstly, we calculate the
width of the counter Wc; the distance between numerals then becomes Wc/8 pixels.
The system then checks whether each numeral exists at the position obtained by
adding Wc/8 pixels from the vertical center of the previous numeral region. If the
absolute distance in the x-coordinate between the "m3" mark and the center of a
numeral region is longer than 40 pixels, this process ends.
Next, the system interpolates missing values and outliers. The system calculates
the median value from the three previous to the three subsequent data. Table 2 shows
an example of the interpolation result; in this table, the missing value and the outlier
are eliminated by the interpolation.
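The median-window interpolation of missing readings can be sketched as follows (a minimal sketch; the function name `interpolate` is our own, missing values are modeled as `None`, and outlier replacement, which the text handles the same way, is omitted):

```python
import statistics

def interpolate(series):
    """Replace each missing reading (None) with the median of up to three
    previous and three subsequent recognized readings, as in Section 3.4."""
    out = list(series)
    for i, v in enumerate(series):
        if v is None:
            window = [x for x in series[max(0, i - 3):i + 4] if x is not None]
            out[i] = statistics.median(window) if window else None
    return out
```

Because the counter is monotonically increasing, the median of the surrounding window is a robust stand-in for the unreadable value, as in the example of Table 2.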
(Annotations: counter width Wc pixels, numeral spacing Wc/8 pixels, failed part)

Fig. 11. An image that the system failed to recognize a numeral

Table 2. An example of the result of interpolation

Time  Estimated value  Interpolated value
t-3   277264           277264
t-2   277265           277265
t-1   277265           277265
t     Not available    277266
t+1   477267           277267
t+2   277267           277267
t+3   277267           277267
t+4   277269           277269
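The interpolation step above can be sketched as follows. This is our reading of the procedure rather than the authors' code: the 7-sample median window follows the text, while the outlier tolerance `tol` is an assumed parameter (and the even-length median here yields 277267 at time t, where Table 2 lists 277266).

```python
# Sketch of the missing-value/outlier interpolation of Sect. 3.4 (our reading,
# not the authors' code). Each reading is compared with the median of the
# window from three previous to three subsequent readings; missing readings
# (None) and readings far from the median are replaced by it. The tolerance
# `tol` is an assumed parameter, not taken from the paper.

def interpolate_readings(readings, tol=2):
    """Return readings with missing values and outliers replaced by the
    median of their 7-sample neighbourhood."""
    cleaned = []
    n = len(readings)
    for i in range(n):
        lo, hi = max(0, i - 3), min(n, i + 4)          # window t-3 .. t+3
        vals = sorted(v for v in readings[lo:hi] if v is not None)
        median = vals[len(vals) // 2]
        v = readings[i]
        cleaned.append(median if v is None or abs(v - median) > tol else v)
    return cleaned

# The situation of Table 2: a missing value at t and an outlier (477267) at t+1.
estimated = [277264, 277265, 277265, None, 477267, 277267, 277267, 277269]
print(interpolate_readings(estimated))
```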

4 Experimental Results

4.1 Experiment in Day Time

We captured a gas meter and evaluated the precision of the estimated gas consumption from 11:00 to 17:00. Figure 12 shows the experimental scene and the captured gas meter. In this experiment, we captured a business-use gas meter.
60 T. Fujisawa et al.
The distance between the gas meter and the camera was 1 m to 2 m. We employed a Cyber-shot DSC-WX30 as the camera. Figure 13 shows the camera and the captured image. The size of the captured images was 3608 × 4658 pixels. In this experiment, we reduced the image size to 1/4 to reduce computational complexity. We classified the gas meter images into the front direction and the horizontal and vertical directions. The camera was set at an angle of 0 to 30 degrees. We prepared 33 images captured from the front and 33 images captured from angled directions. The experiment set minL = 0.8, maxL = 1, minSAD = 40 and maxSAD = 80.
Table 3 shows the recognition results of each process. In this table, images pass from the top process to the bottom process; when an image failed at an upstream process, the downstream processes were not applied to it. From the table, we can see that our method correctly recognized the numeral regions extracted from images captured from the front of the gas meter. For the vertical and horizontal directions, the number of successes was 29/33. From these results, we confirmed that the system successfully recognizes the numerals from various angles.

Fig. 12. The experiment scene (left) and the employed gas meter (right)

Fig. 13. The employed camera (left) and the captured image (right)

Table 3. Recognition result of the gas consumption

                               Ratio of successfully processed images
                               Front of the      Vertical and horizontal
                               gas meter         of the gas meter
Counter region extraction      33/33             32/33
Binary image processing        32/33             30/32
Numeral region extraction      32/32             30/30
Numeral recognition            32/32             29/30

4.2 Experiment in Day and Night Time

We captured a gas meter over a full day and estimated the gas consumption. We employed the same gas meter as in Experiment 4.1. In this experiment, the camera was set in the front direction at a distance of 1 m from the gas meter. The system captured the gas meter every ten minutes, from 5:30 PM to 6:10 PM of the next day. We employed an Optio WG-2 GPS as the camera. Figure 14 shows an image captured by the camera. At night, the camera used its flash to capture images. The size of the captured images was 1920 × 1080 pixels.
Table 4 shows the recognition results. As the table shows, our system estimated the value in 120 of the 149 captured images. Figure 15 visualizes the resulting gas consumption graph, which plots the ground-truth, estimated, and interpolated values; the yellow region marks missing values. From this figure, we can see that our method corrected the misrecognitions by the interpolation process. The mean absolute error between the ground-truth and interpolated values was 0.3 m3, and the maximum error was 3 m3. We confirmed that the interpolated values closely reproduce the ground truth.

Fig. 14. An example of the captured image



Table 4. Recognition result of the gas consumption

                               Gas meter images passed
Counter region extraction      133/149
Binarization processing        127/133
Numeral region extraction      124/127
Numeral recognition            120/124

Fig. 15. Visualized graph of gas consumption: gas consumption (m3, from about 277220 to 277340) plotted over time from 17:30 to 17:30 of the next day, showing the truth value, the estimated value, and the interpolated value

5 Conclusions

This section first considers the images for which the estimation of gas consumption failed in the experiments.
In Experiment 4.2, the precision of the estimated values was poor at night. Figure 16 shows a failed image, which was captured at night. At night, the system often failed to extract the counter region; we think this is because edges were over-emphasized by the flash. The system also failed to extract the counter region when strong sunlight affected the image, as shown in Figure 17. Therefore, the system needs to take the lighting conditions of the counter region into account. Apart from these cases, our system successfully monitors the gas consumption and visualizes the trend graph in both day and night time. The system could also be applied to gas leak detection by analyzing the gas consumption trend in real time. Furthermore, it could be used to watch over people who live alone by detecting periods with no gas consumption. Thus, this system is useful for the energy consumption problem as well as for watching over people who live alone, especially the elderly.
In the future, we will investigate the domestic gas consumption over a day. In addition, we will improve the recognition accuracy at night.

Fig. 16. Sample of the binary image (left), the edge image (right) and the extracted counter region (bottom) at night

Fig. 17. Sample of the counter region (left) and the binary image (right) in day time.

Ultrasonic Muscular Thickness Measurement
in Temperature Variation

Hideki Hata1, Seturo Imawaki2, Kei Kuramoto1,3, Syoji Kobashi1,3, and Yutaka Hata1,3
1 Graduate School of Engineering, University of Hyogo, Japan
2 Ishikawa Hospital, 784 Bessho Himeji, 671-0221, Japan
3 WPI Immunology Frontier Research Center, Osaka University
hatahideki@ieee.org

Abstract. This paper proposes a muscular thickness measurement method using the temperature dependence of acoustic velocity. It is known that the change of acoustic velocity with temperature depends on the material. From this principle, we measure the muscular thickness. We employ a 1.0 MHz ultrasonic probe and acquire two ultrasonic echoes from the same position of the body at different temperatures. From these echoes, we extract boundary surface echoes, and the muscle and fat regions are identified by the difference between the acoustic velocity-temperature characteristics of muscle and fat. In our experiment, we employ a piece of pork as an experimental phantom and acquire ultrasonic echoes reflected from it. Our proposed method successfully measured the thicknesses of the muscle and fat regions.

Keywords: ultrasonic, boundary surface echo, ultrasonic velocity, propagation time.

1 Introduction

The number of elders who need daily care is increasing in Japan [1], [2]. In 2010, 3.89 million elders received certification for long-term care. Being bedridden due to decline in physical function accounted for 13.1% of the causes [3]. According to the surveys, muscle shortage is mentioned as a cause of this physical decline; the muscle mass of elderly people is about 68% of that of younger people [4]. Exercise is important in order to prevent the decline of muscle. However, the elderly tend to lack exercise because of the burden of exercising caused by muscle decline, changes of lifestyle habits and so on, which causes further decline of muscle. What is worse, lack of exercise causes further loss of physical function and reduction of social activity. To solve these problems, we need to change their awareness of exercise. However, excessive exercise may cause circulatory collapse or fracture by falls. Therefore, it is important to estimate appropriate exercise for each elder. Thus, we develop a system that suggests exercise suitable for each person by estimating his or her muscle mass.
In present studies, muscle mass is estimated by a body composition monitor for home use. However, its reliability is low because it estimates the mass from the body fat

Y.S. Kim et al. (eds.), Advanced Intelligent Systems, 65
Advances in Intelligent Systems and Computing 268,
DOI: 10.1007/978-3-319-05500-8_7, © Springer International Publishing Switzerland 2014
66 H. Hata et al.

percentage based on the bioelectrical impedance method and body weight [5]. As another method, an MR (Magnetic Resonance) device can measure the mass accurately; however, its use is limited to hospitals, and it is costly and time consuming. Therefore, we propose a muscular thickness measurement method using ultrasonic waves. In our system, real-time measurement is possible and the ultrasonic device is portable [6]; therefore, it can be used at home, which reduces the burden on the elderly. In the conventional method with ultrasonic waves, ultrasonic B-mode imaging is employed to measure muscles inside the human body. However, it is difficult to identify the fat region and the muscular region [7]. Here, we employ the ultrasonic principle that the ultrasonic velocity changes with temperature and tissue: with increasing temperature, it increases in muscle, whereas it decreases in fat [7]. By using this principle, the proposed system estimates the fat and muscle areas.
In our proposed method, we employ a single ultrasonic probe to irradiate and acquire ultrasonic waves. For measurement, we acquire two ultrasonic waves from the same position at different temperatures. The method detects the boundary surfaces between muscle and fat from the reflected echoes. Each region delimited by the detected surfaces is classified as fat or muscle by the variation of the ultrasonic propagation times between the two acquired echoes. From the classification result and the propagation times, our method calculates the thickness of each region. In our experiment, we made a quad-layer phantom using fat and muscle of pork. We acquired ultrasonic waves from the phantom and estimated the thickness of each layer with a mean absolute error of 0.421 mm.

2 Proposed Method

Our experimental system is shown in Figure 1. The probe irradiates an ultrasonic wave into a measurement object in contact with it, and then receives the reflected waves. In this study, we employ an ultrasonic measurement system using a single probe; we employed a 1.0 MHz single probe, whose depth of field is large. The probe is shown in Figure 2. The received waves are provided to a personal computer through a pulser-receiver and a digital oscilloscope (Pico Technology, PicoScope 4227). The sampling interval is 4.0 ns. In our experiment, the temperature of the measurement object is increased by a thermostatic water tank. A waterproof digital thermometer (SATO, SK-1250MC II) measures the internal temperature of the object.
Figure 3 shows a flowchart of our proposed method. Firstly, we acquire an ultrasonic waveform of the measurement object as a baseline waveform uB(t). Here, the notation t denotes a sampling time. Secondly, we raise the temperature of the thermostatic water tank, and then acquire an ultrasonic waveform from the same position of the warmed object as a warmed waveform uW(t). Thirdly, from the baseline waveform, we extract the boundary surface echoes xB,j(tB,j+t) between fat and muscle. The notation j denotes an index of the echoes, numbered from the surface to the bottom of the object, and tB,j denotes the start time of the j-th surface echo. Fourthly, from the warmed waveform, we detect the boundary surface echoes reflected from the same boundaries as in the baseline waveform. Correlation coefficients are used to detect these boundary
Ultrasonic Muscular Thickness Measurement in Temperature Variation 67

surface echoes. From these boundaries, we distinguish muscle from fat in the
measurement object. Finally, the muscular and fatty thicknesses are calculated from
ultrasonic propagation time.

Fig. 1. Ultrasonic waveform acquisition system (single probe, pulser-receiver, digital oscilloscope and personal computer; the biological phantom with thermometer rests on an acrylic plate in a thermostatic water tank)

Fig. 2. Ultrasonic single probe (30.20 mm × 14.11 mm)

Start → Step 1. Acquire two waveforms → Step 2. Extract boundary surface echoes → Step 3. Explore corresponding boundary surface echoes → Step 4. Divide each region → Step 5. Calculate thickness of each region (using ultrasonic velocity and propagation time) → End

Fig. 3. Flowchart of proposed method



2.1 Boundary Surface Determination

On a boundary surface between materials of different acoustic impedance, ultrasonic waves are reflected. Therefore, we can acquire an echo reflected at the boundary between the muscle and fat regions. Figure 4 shows an example of a boundary surface echo. As shown in this figure, the echo has a larger amplitude than the other echoes; therefore, we are able to extract it by a thresholding process. In this paper, the threshold th is empirically determined as 20% of the maximum amplitude of the waveform. Our method extracts the boundary surface echoes xB,j(tB,j+t) from the baseline waveform uB(t). Firstly, we detect a start time tB,j for each echo: we find the first time at which the received voltage crosses the threshold (the red circle in Figure 4) and define the zero-cross point immediately before it as the start time tB,j. Secondly, we extract the end time of the boundary surface echo: as shown by the green circle in Figure 4, we find the last time at which the received voltage exceeds the threshold within the echo, and define the zero-cross point immediately after it as the end time. Finally, the waveform from the start time to the end time is extracted as the boundary echo xB,j(tB,j+t). By this processing, we can extract several boundary echoes from a waveform acquired from a multi-layer object. We assume that the region between two adjacent boundary echoes xB,j(tB,j+t) and xB,(j-1)(tB,(j-1)+t) consists of a single tissue.
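The thresholding and zero-cross steps above can be sketched as follows; this is a simplified illustration under assumed conditions (uniformly sampled voltages, a fixed sample gap `gap` for separating echoes), not the authors' implementation.

```python
# Sketch of the boundary-surface-echo extraction of Sect. 2.1 (illustrative,
# not the authors' code). Samples whose amplitude exceeds th = 20% of the
# maximum are grouped into echoes; each echo is then extended backwards to
# the zero-cross (sign change or zero sample) before its first crossing and
# forwards to the zero-cross after its last above-threshold sample.

def extract_echoes(u, th_ratio=0.2, gap=5):
    """Return (start, end) sample-index pairs of boundary surface echoes."""
    th = th_ratio * max(abs(v) for v in u)
    hot = [i for i, v in enumerate(u) if abs(v) >= th]  # above-threshold samples
    echoes = []
    if not hot:
        return echoes
    seg_start = prev = hot[0]
    for i in hot[1:] + [None]:
        if i is None or i - prev > gap:          # segment finished
            s = seg_start
            while s > 0 and u[s - 1] * u[s] > 0:  # walk back to zero-cross
                s -= 1
            e = prev
            while e + 1 < len(u) and u[e] * u[e + 1] > 0:  # forward zero-cross
                e += 1
            echoes.append((s, e))
            seg_start = i
        if i is not None:
            prev = i
    return echoes

# Toy waveform with two echoes (large bursts) on a quiet baseline.
wave = [0.0, 0.1, -0.1, 0.0, 2.0, -1.5, 1.0, -0.2, 0.0, 0.05, 0.0, 0.0, -0.9, 0.6, 0.0]
print(extract_echoes(wave))
```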
To compare the ultrasonic propagation times between the baseline waveform uB(t) and the warmed waveform uW(t), we detect the start times tW,j which correspond to the start times tB,j of the baseline waveform. However, a temperature rise may cause voltage changes, as shown in Figure 5; in this case, the start time of the echo is significantly shifted by the thresholding method. Therefore, we extract corresponding boundary surface echoes based on the correlation coefficient between boundary surface echoes of the baseline waveform and the warmed waveform. A conceptual diagram of detecting a corresponding boundary surface echo is shown in Figure 6. Firstly, we explore every zero-cross
point of the warmed waveform as a candidate for the start time sk (k = 1, 2, …, M) of the
corresponding boundary echo. Next, we calculate correlation coefficient Rj,k between
the candidate and extracted boundary echoes by formula (1).
Rj,k = Σ_{t=1}^{nj} {xB,j(tB,j+t) − xa}{uW(sk+t) − ua} / √( Σ_{t=1}^{nj} {xB,j(tB,j+t) − xa}^2 · Σ_{t=1}^{nj} {uW(sk+t) − ua}^2 )        (1)

Here, xB,j(tB,j+t) denotes the j-th boundary surface echo of the baseline waveform. The notations nj and xa denote the data length and the mean value of xB,j(tB,j+t), respectively. In addition, the notation ua denotes the mean value of the warmed waveform from sk to sk + nj.
We consider that a small temperature change does not cause a large propagation time shift; thus, we assume that a corresponding echo exists at a similar time. In addition, because the echoes appear in sequence, the (j+1)-th echo appears after the j-th echo. From these assumptions, we detect the corresponding echoes in turn. To detect the echo corresponding to the first echo, we calculate the correlation coefficient Rj,k for every sk from 0 to tB,1 + 10 μs. Then, we select the candidate with the highest correlation coefficient as the corresponding start time tW,j. If the candidate sc is selected, the next corresponding echo occurs after it; therefore, to detect the next corresponding echo, we calculate the correlation coefficient Rj,k for every sk from s(c+1) to tB,2 + 10 μs. We repeat this calculation and selection for all j.
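The correspondence search and formula (1) can be sketched as follows. The `correlation` function is the Pearson correlation of formula (1); the candidate list and the selection of the best-correlated start time follow the text, while the helper names are our own.

```python
# Sketch of the correspondence search of Sect. 2.1 (illustrative, not the
# authors' code). Formula (1) is the Pearson correlation between a baseline
# boundary echo and an equally long slice of the warmed waveform starting at
# candidate time s_k; the best-correlated candidate becomes t_W,j.

def correlation(x, y):
    """Pearson correlation coefficient R between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den if den else 0.0

def match_echo(echo, warmed, candidates):
    """Return the candidate start index of `warmed` whose slice correlates
    best with `echo` (the selection step of the proposed method)."""
    n = len(echo)
    scored = [(correlation(echo, warmed[s:s + n]), s)
              for s in candidates if s + n <= len(warmed)]
    return max(scored)[1]

# Toy example: the warmed waveform contains a slightly rescaled copy of the
# echo starting at index 2, so that candidate should win.
echo = [0.0, 1.0, -1.0, 0.5]
warmed = [0.1, -0.2, 0.0, 1.1, -0.9, 0.6, 0.0, 0.0]
print(match_echo(echo, warmed, [0, 2, 4]))
```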

Fig. 4. Example of boundary surface echo (the echo exceeds the threshold th; its start time tB,j and end time tB,j+nj are given by the adjacent zero-cross points)

Fig. 5. Start time change caused by the amplitude change before and after the temperature rises

Fig. 6. Conceptual diagram of identifying the same boundary surface echo (candidate start times sk on the warmed waveform are compared with boundary surface echoes j−1 and j via Rj,k)

2.2 Thickness Determination

The system compares the ultrasonic propagation time of each region, delimited by the boundary surface echoes, between the baseline and warmed waveforms. In this paper, the ultrasonic propagation time Tj is defined as the interval from the start time of the (j−1)-th boundary surface echo to the start time of the j-th one. The propagation times are calculated by formulas (2) and (3).

TB,j = tB,j − tB,(j−1)   [s]        (2)

TW,j = tW,j − tW,(j−1)   [s]        (3)

In this research, the ultrasonic single probe is in contact with the measurement object, so the distance between the probe and the object is zero; therefore, the start time tB,0 of the boundary surface echo from the first surface is zero. Next, in theory, the ultrasonic velocity in muscle rises and that in fat falls with rising temperature. From this theory, a region whose propagation time decreases with the temperature rise is estimated to be muscle, and a region whose propagation time increases is estimated to be fat. The thickness L of each region is calculated by formula (4) [8].

L = vj Tj / 2   [m]        (4)
Here, the notation vj denotes the ultrasonic velocity in the medium, determined from the two propagation times by formula (5) [9].

vj = 1430 [m/s]  if TW,j > TB,j
vj = 1560 [m/s]  otherwise        (5)
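Formulas (2) to (5) combine into the following sketch (our illustration, not the authors' code). Note that the paper's Table 4 uses velocities read from Figure 8, so the fixed 1430/1560 m/s values of formula (5) give slightly different thicknesses.

```python
# Sketch of the region classification and thickness calculation of Sect. 2.2
# (formulas (2)-(5)); illustrative, not the authors' code. A region whose
# propagation time grows with temperature is taken as fat (v = 1430 m/s),
# otherwise as muscle (v = 1560 m/s); the thickness is L = v*T/2.

V_FAT, V_MUSCLE = 1430.0, 1560.0   # [m/s], the two velocities of formula (5)

def classify_and_measure(t_base, t_warm):
    """t_base, t_warm: boundary-echo start times [s] for the baseline and
    warmed waveforms, with t[0] = 0 at the probe surface. Returns a list of
    (tissue, thickness_m) tuples, one per region."""
    regions = []
    for j in range(1, len(t_base)):
        Tb = t_base[j] - t_base[j - 1]        # formula (2)
        Tw = t_warm[j] - t_warm[j - 1]        # formula (3)
        is_fat = Tw > Tb                      # propagation time increased
        v = V_FAT if is_fat else V_MUSCLE     # formula (5)
        regions.append(('fat' if is_fat else 'muscle', v * Tb / 2))  # formula (4)
    return regions

# Start times of Table 2 (308 K baseline vs 309 K warmed), with t0 = 0
# prepended at the probe surface, converted from microseconds to seconds.
base = [0.0, 6.970e-6, 18.416e-6, 27.457e-6, 41.971e-6]
warm = [0.0, 6.987e-6, 18.353e-6, 27.475e-6, 41.956e-6]
for tissue, L in classify_and_measure(base, warm):
    print(tissue, round(L * 1000, 3), 'mm')
```

The tissue order recovered from the Table 2 start times is fat, muscle, fat, muscle, matching the phantom's actual layering.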

3 Experimental Results

3.1 Preliminary Experiment

We verified the velocity-temperature characteristics of the fatty and muscular phantoms shown in Figure 7. The thickness of the fatty phantom was 23.00 mm and that of the muscular phantom was 15.95 mm. The temperature range was from 308.0 K to 311.0 K, and the measurement interval was 1.0 K. We calculated the ultrasonic velocity from the thicknesses of these phantoms and the propagation times of the ultrasonic waves. The experimental results are shown in Figure 8: the ultrasonic velocity in muscle rose, and that in fat fell, with rising temperature.

Fig. 7. Fatty phantom, 23.00 mm thick (left), and muscular phantom, 15.95 mm thick (right), with their measurement points

Fig. 8. Ultrasonic velocity-temperature characteristics of the muscular and fatty phantoms (velocity in m/s, from about 1350 to 1650, versus temperature from 307 K to 312 K)

3.2 Experiment with Multilayer Phantom

We employed the multilayer phantom shown in Figure 9 to test our region estimation and thickness measurement method. The phantom consists of four layers: the first and third layers were made of pork fat, and the second and fourth layers of pork muscle. The thicknesses of the layers were 5.78 mm, 8.47 mm, 6.48 mm and 11.45 mm, respectively. We measured ultrasonic waveforms while raising the temperature from 308 K to 311 K, with a measurement interval of 1.0 K. Figure 10 shows the waveform measured at 308 K; in our experiment, we employed this waveform as the baseline waveform. Using the proposed method, we identified the boundary surface echoes. As a result, we obtained four boundary surface echoes, which matched the number of layers. Next, we explored all candidate start times in the warmed waveforms measured at 309 K, 310 K and 311 K, and calculated the correlations between these candidates and the boundary echoes. Figure 11 shows the waveform and the candidate echoes at 309 K, and Table 1 shows the correlation coefficients between the boundary surface echoes at 308 K and the echoes at 309 K. From this table, we detected the corresponding echoes: echoes 1, 5, 10 and 11 were detected as the start times of the corresponding boundary echoes. Next, we applied the same processing to the 310 K and 311 K waveforms, and identified the echo corresponding to each boundary

surface echo at 308 K. The identified start times of each boundary surface echo are shown in Table 2. We calculated the propagation time of each region by using formula (2) and the results of Table 2; Table 3 shows the calculated ultrasonic propagation times. From Table 3, the change of propagation time in the regions was, in order from the measurement point: increase, decrease, increase, decrease. Therefore, the regions can be considered to be fat, muscle, fat and muscle, in order from the measurement point. This estimation agreed with the actual placement of the phantom used in this experiment. Next, we calculated the thickness of each region by using formula (4) and the results of Figure 8. Table 4 shows the calculated thicknesses, and the measurement errors are shown in Table 5. From Table 5, the measurement errors of the first layer at all temperatures were larger than those of the other layers.

Fig. 9. Multilayer phantom (layer thicknesses, in order from the measurement point: 5.78 mm, 8.47 mm, 6.48 mm, 11.45 mm)

Fig. 10. Measurement waveform at 308 K (voltage in V versus time in μs; the four boundary surface echoes Echo 1-4, their start times s1-s4, and the propagation times T1-T4 are marked)



Fig. 11. Start times of echo candidates at 309 K (voltage in V versus time in μs; the candidate start times s1-s15 are marked)

Table 1. Correlation coefficients between the boundary surface echoes at 308 K and the candidate echoes at 309 K

                           308 K
309 K     Time [μs]  Boundary 1  Boundary 2  Boundary 3  Boundary 4
Echo 1    6.970      0.985
Echo 2    8.242      0.244       0.260
Echo 3    8.851      -0.210      0.227
Echo 4    9.735      0.402       -0.002
Echo 5    17.42                  0.996
Echo 6    19.58                  -0.278      -0.773
Echo 7    20.41                  -0.110      0.562
Echo 8    21.36                  0.544       -0.575
Echo 9    26.36                  -0.371      -0.878
Echo 10   27.46                              0.996
Echo 11   41.97                                          0.991
Echo 12   42.81                                          -0.587
Echo 13   43.41                                          0.488
Echo 14   44.11                                          -0.243
Echo 15   44.85                                          -0.165

Table 2. Start time of each boundary surface echo

Temperature [K]  Boundary 1 [μs]  Boundary 2 [μs]  Boundary 3 [μs]  Boundary 4 [μs]
308              6.970            18.416           27.457           41.971
309              6.987            18.353           27.475           41.956
310              6.990            18.333           27.502           41.842
311              7.004            18.336           27.528           41.948

Table 3. Propagation time at each region

Temperature [K]  Region 1 [μs]  Region 2 [μs]  Region 3 [μs]  Region 4 [μs]
308              6.970          11.446         9.041          14.514
309              6.987          11.366         9.122          14.482
310              6.990          11.343         9.170          14.340
311              7.004          11.332         9.192          14.420

Table 4. Thickness of each region

Temperature [K]  Region 1 [mm]  Region 2 [mm]  Region 3 [mm]  Region 4 [mm]
308              4.949          8.963          6.419          11.365
309              4.937          8.956          6.444          11.412
310              4.914          8.972          6.446          11.300
311              4.906          8.992          6.439          11.478
Material         fat            muscle         fat            muscle

Table 5. Thickness error of each region

          308 K [mm]  309 K [mm]  310 K [mm]  311 K [mm]
Region 1  -0.831      0.493       -0.061      0.265
Region 2  -0.843      0.486       -0.036      0.312
Region 3  -0.866      0.502       -0.034      0.2
Region 4  -0.874      0.522       -0.041      0.378

4 Discussion
In this paper, we proposed a method for identifying muscular and fatty regions and calculating the thickness of each region. From Table 4, the calculated thicknesses do not differ much between the baseline waveform and the warmed waveforms in any region. In addition, the thickness error was less than 1.0 mm. The muscle thickness of the thigh of the elderly is about 20 mm [10]; from the above, we consider that the proposed method measures the thickness with high accuracy. Next, the largest thickness error occurred in the first layer in all calculation results. The first layer was in contact with the ultrasonic single probe; therefore, we consider that the thickness change of the first layer was caused by the contact with the single probe.

5 Conclusion
In this research, we confirmed the difference in the ultrasonic velocity-temperature characteristics between fat and muscle. Based on this, we proposed a method for identifying muscular and fatty regions. Firstly, the method extracts the start times of the boundary surface echoes. Secondly, it divides the object into regions by these start times. Finally, it compares the propagation time of each region between the baseline waveform and the warmed waveform. After identifying each region, the method calculates its thickness. As a result, the calculation results were highly accurate. In future studies, we will improve the accuracy of the thickness measurement, and measure the muscular thickness of the human body by using infrared light to raise the temperature.

References
1. Ministry of Health, Labour and Welfare, Annual changes in population dynamics overview
(2010), http://www.mhlw.go.jp/toukei/saikin/hw/jinkou/
kakutei10/dl/04_h2-1.pdf
2. Ministry of Health, Labour and Welfare, System in accordance with the need of nursing
care (2010),
http://www.mhlw.go.jp/topics/kaigo/nintei/gaiyo1.html
3. Ministry of Health, Labour and Welfare, Trends in the number of Qualified Person of the
level of care required (2010),
http://www.mhlw.go.jp/toukei/saikin/hw/jinkou/kakutei10/
dl/04_h2-1.pdf
4. Research of JARD (2001),
http://www5f.biglobe.ne.jp/~rokky/siki/JARD2001_04.pdf
5. Kai, Y., Fujino, H., Murata, S., Takei, K., Murata, J., Tateda, I.: Relationships among
Body Composition, Upper and Lower Limb Muscle Strength and Circumferences of the
Extremities. Physical Therapy Science 23(2), 241–244 (2008) (in Japanese)
6. Shiina, T., Yamakawa, M., Nitta, N., Ueno, E., Matsumura, T., Tamano, S., Mitake, T.:
Clinical Assessment of Real-time, Freehand Elasticity Imaging System Based on The
Combined Autocorrelation Method. In: IEEE Ultrasonic Symposium (2003)
7. Horinaka, H., Sakurai, D., Sano, H., Ohara, Y., Maeda, Y., Wada, K., Matsunaka, T.:
Optically assisted ultrasonic velocity change images of visceral fat in a living animal. In:
Proc. of IEEE Ultrasonics Symposium 2010, pp. 1416–1419 (2010)
8. Lamberti, N., Ardia, L., Albanese, D., Di Matteo, M.: An ultrasound technique for
monitoring the alcoholic wine fermentation. Ultrasonics 49, 94–97 (2009)
9. Nakamura, Y., Fujihata, K., Yanagisawa, T., Hu, A., Nakamura, Y., Matsuzaki, Y.:
Measurement of transmission characteristics of ultrasound on the distal radius. Technical
Report of IEICE (in Japanese)
10. Ikezoe, T., Asakawa, Y., Shima, H., Ichihashi, N.: Age-related Changes on Muscle
Architectural Characteristics and Strength in the Human Quadriceps. Physical Therapy
Science 34(5), 232–238 (2007) (in Japanese)
Regional Analysis and Predictive Modeling
for Asthmatic Attacks in Himeji City

Sho Kikuchi1, Yusho Kaku1, Kei Kuramoto1,2, Syoji Kobashi1,2, and Yutaka Hata1,2
1 Graduate School of Engineering, University of Hyogo, Hyogo, Japan
kikuchi_sho@ieee.org
2 WPI Immunology Frontier Research Center, Osaka University, Osaka, Japan

Abstract. The number of asthmatic attacks was predicted by time-series data analysis in two areas of Himeji city, a coastal area and an inland area. As a result, the SARIMA model obtained the best result in the inland area, with CC = 0.733 and MAPE = 13.4, and the AR model obtained the best result in the coastal area, with CC = 0.549 and MAPE = 13.9. The prediction for the inland area achieved sufficient precision, whereas the prediction for the coastal area did not. Therefore, it was confirmed that prediction by time-series models is difficult in some areas.

Keywords: asthmatic attack, AR model, SARIMA model, healthcare system, prediction model, time-series data.

1 Introduction
According to a government report in 2011, the number of asthmatics in Japan was about 8 million. Asthma is a chronic inflammation of the bronchus, and an attack makes the patient's breathing difficult. Attacks are related to air temperature, atmospheric pressure, humidity, ticks, air pollution and so on. In the worst case, an attack leads to death due to dyspnea. However, if asthmatics take an inhaled steroid before an attack happens, they can prevent it.
In our study group, we have already predicted the number of asthmatic attacks for each patient generation in Himeji city with a Fuzzy-AR model, and we showed that the predictive precision varies with the generation [1]-[2]. Generally, asthmatic attacks of children and adults are caused by different factors. In children, many cases are caused by a specific allergen; this is called atopic asthma. The allergens of atopic asthmatics can be identified by checking their antibodies. In adults, the allergens cannot be identified; this is called non-atopic asthma. It is related to colds, cigarette smoke, stress, chemical substances and so on. From the preceding study, we considered it useful to make predictions for groups that have different causes.

Y.S. Kim et al. (eds.), Advanced Intelligent Systems, 77


Advances in Intelligent Systems and Computing 268,
DOI: 10.1007/978-3-319-05500-8_8, © Springer International Publishing Switzerland 2014
78 S. Kikuchi et al.

With a population of 500,000 people, Himeji city ranks second in the prefecture in population, commerce and industry [3]. The coastal area of Himeji city has a factory zone, while the inland area has abundant nature; thus, the natural environment of the coastal area differs dramatically from that of the inland area. In this paper, we predict the number of asthmatic attacks by region in Himeji city and investigate the relation between asthmatic attacks and regional characteristics. As a result, the prediction for the inland area achieved sufficient precision, whereas the prediction for the coastal area did not.

2 Prediction Method

In this study, as prediction methods for the number of asthmatic attacks, we employ the autoregressive (AR) model and the seasonal autoregressive integrated moving average (SARIMA) model, which are used for analyzing economic systems [4]-[6]. In addition, we consider multiple factors by fuzzy logic: we build the fuzzy AR model and the fuzzy SARIMA model by adding fuzzy logic to the AR model and the SARIMA model [7]-[11]. In this study, air temperature, atmospheric pressure and humidity are considered to predict the number of asthmatic attacks.

2.1 The Fuzzy AR Model

The fuzzy AR model is defined by (1).


        y(t) = Σ_{i=1}^{p} a(i) x(t − i) + u(t) + (μ − 0.5) · w        (1)

        μ = (μT + μP + μH) / 3
Here, y(t) denotes the predicted value at time t, x(t) the observed value, a(i) the AR parameters, u(t) white noise, p the model order, and w a weighting value. We consider three pieces of knowledge about the relation between asthmatic attacks and climate.

Knowledge 1: Asthmatic attacks are influenced by air temperature.
Knowledge 2: Asthmatic attacks are influenced by atmospheric pressure.
Knowledge 3: Asthmatic attacks are influenced by humidity.

According to this knowledge, the following fuzzy IF-THEN rules are derived.

Rule 1: IF the mean air temperature of the month is lower than the mean air temperature of the same month in previous years, THEN the degree of the air temperature μT is high.
Rule 2: IF the mean atmospheric pressure of the month is lower than the mean atmospheric pressure of the same month in previous years, THEN the degree of the atmospheric pressure μP is high.
Regional Analysis and Predictive Modeling for Asthmatic Attacks in Himeji City 79

Rule 3: IF the mean humidity of the month is higher than the mean humidity of the same month in previous years, THEN the degree of the humidity μH is high.

According to Rules 1 to 3, fuzzy membership functions are defined as shown in Fig. 1. In Fig. 1, T denotes the mean air temperature of the prediction month, Tave the mean air temperature of the same month in previous years, P the mean atmospheric pressure of the prediction month, Pave the mean atmospheric pressure of the same month in previous years, H the mean humidity of the prediction month, and Have the mean humidity of the same month in previous years.

[Fig. 1 shows three ramp-shaped membership functions: (a) temperature ST(T), decreasing from 1.0 at Tave − CT to 0 at Tave + CT; (b) atmospheric pressure SP(P), decreasing from 1.0 at Pave − CP to 0 at Pave + CP; (c) humidity SH(H), increasing from 0 at Have − CH to 1.0 at Have + CH.]
Fig. 1. Fuzzy membership functions
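The membership functions of Fig. 1 and their combination in (1) can be sketched in code. The following fragment is an illustrative sketch, not part of the original system; the ramp widths (CT, CP, CH) and all numeric values are assumed for demonstration only.

```python
import numpy as np

def membership_low(x, x_ave, c):
    # Degree that x is "lower than the historical mean" (Rules 1 and 2):
    # 1.0 at x <= x_ave - c, 0.0 at x >= x_ave + c, linear in between.
    return float(np.clip((x_ave + c - x) / (2.0 * c), 0.0, 1.0))

def membership_high(x, x_ave, c):
    # Degree that x is "higher than the historical mean" (Rule 3).
    return 1.0 - membership_low(x, x_ave, c)

def fuzzy_ar_predict(x_past, a, mu, w, u=0.0):
    # Eq. (1): y(t) = sum_{i=1}^{p} a(i) x(t-i) + u(t) + (mu - 0.5) * w
    ar_part = sum(a[i] * x_past[-(i + 1)] for i in range(len(a)))
    return ar_part + u + (mu - 0.5) * w

# Illustrative values (not from the paper): a colder, lower-pressure,
# more humid month than usual pushes the fuzzy correction term upward.
mu_T = membership_low(12.0, x_ave=15.0, c=5.0)      # temperature, Rule 1
mu_P = membership_low(1010.0, x_ave=1013.0, c=5.0)  # pressure, Rule 2
mu_H = membership_high(75.0, x_ave=70.0, c=10.0)    # humidity, Rule 3
mu = (mu_T + mu_P + mu_H) / 3.0
y = fuzzy_ar_predict([820.0, 790.0, 850.0], a=[0.6, 0.3], mu=mu, w=50.0)
```

With μ above 0.5, the AR forecast is shifted upward by a fraction of the weight w, which is how the rules encode "asthma-prone" weather.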

2.2 The Fuzzy SARIMA Model


The fuzzy SARIMA model is defined by (2).

        φ(B) Φ(B^s) w_t = θ(B) Θ(B^s) u(t) + (μ − 0.5) · w

        φ(B) = 1 − φ1 B − φ2 B^2 − … − φp B^p
        Φ(B^s) = 1 − Φ1 B^s − Φ2 B^{2s} − … − ΦP B^{Ps}                (2)
        θ(B) = 1 − θ1 B − θ2 B^2 − … − θq B^q
        Θ(B^s) = 1 − Θ1 B^s − Θ2 B^{2s} − … − ΘQ B^{Qs}
        w_t = ∇_s^D ∇^d x(t)

Here, φp denotes the AR parameters, ΦP the seasonal AR parameters, θq the moving average (MA) parameters, ΘQ the seasonal MA parameters, p the AR order, P the seasonal AR order, q the MA order, Q the seasonal MA order, d the order of differencing, and D the order of seasonal differencing. In this study, each parameter is estimated by the maximum likelihood procedure, and each order is selected by the AIC [12]-[13].
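The order selection step can be illustrated with a small sketch. The paper estimates parameters by maximum likelihood; the fragment below is a simplification that fits a plain AR model by ordinary least squares and scores each order with the Gaussian AIC approximation n·ln(RSS/n) + 2k. It is not the authors' implementation.

```python
import numpy as np

def fit_ar_least_squares(x, p):
    # Fit x(t) = sum_{i=1}^{p} a(i) x(t-i) + e(t) by ordinary least squares.
    X = np.column_stack([x[p - i:len(x) - i] for i in range(1, p + 1)])
    y = x[p:]
    a, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = float(np.sum((y - X @ a) ** 2))
    return a, rss

def select_ar_order(x, p_max):
    # Choose the order minimizing the Gaussian AIC: n*ln(RSS/n) + 2k.
    # (n shrinks slightly with p here -- a standard caveat of this sketch.)
    best_p, best_aic = None, np.inf
    for p in range(1, p_max + 1):
        _, rss = fit_ar_least_squares(x, p)
        n = len(x) - p
        aic = n * np.log(rss / n) + 2 * (p + 1)  # +1 for the noise variance
        if aic < best_aic:
            best_p, best_aic = p, aic
    return best_p
```

On a long series generated by a true AR(2) process, `select_ar_order` will usually recover an order of 2 (the AIC is known to overfit occasionally, which is why [13] analyzes its selection behavior).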

3 Results and Discussion

3.1 Experimental Method

We predict the number of asthmatic attacks in the next month for the whole of Himeji city with the prediction models, which are constructed from past data. In this study, the AR model and the SARIMA model use the monthly number of asthmatic attacks from 2001 to 2007 as the database and predict the monthly number of asthmatic attacks from 2008 to 2011. In addition, the fuzzy AR model and the fuzzy SARIMA model use weather data composed of air temperature, atmospheric pressure, and humidity. The asthmatic attack data were acquired from the database of the Himeji City Medical Association [14], and the weather data from the database of the Japan Meteorological Agency [15]. We compared the prediction results of the AR, SARIMA, fuzzy AR, and fuzzy SARIMA models by the correlation coefficient (CC) and the mean absolute percentage error (MAPE). In addition, we compare the results by area, obtained from similar experiments for the coastal and inland areas of Himeji city.
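The two evaluation measures can be stated compactly. This is a generic sketch of CC and MAPE as they are commonly defined; the paper itself does not give explicit formulas, and the sample numbers below are illustrative.

```python
import numpy as np

def correlation_coefficient(obs, pred):
    # Pearson correlation coefficient between observed and predicted series.
    obs = np.asarray(obs, float)
    pred = np.asarray(pred, float)
    return float(np.corrcoef(obs, pred)[0, 1])

def mape(obs, pred):
    # Mean absolute percentage error, in percent.
    obs = np.asarray(obs, float)
    pred = np.asarray(pred, float)
    return float(np.mean(np.abs((obs - pred) / obs)) * 100.0)

observed = [1000.0, 1200.0, 800.0, 900.0]   # illustrative monthly counts
predicted = [1100.0, 1150.0, 850.0, 950.0]
cc = correlation_coefficient(observed, predicted)
err = mape(observed, predicted)
```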

[Plot: monthly number of asthmatic attacks (cases, 0–2000), 2008–2011, comparing the observed values with the predictions of the AR, SARIMA, Fuzzy-AR, and Fuzzy-SARIMA models.]

Fig. 2. Prediction results for the whole of Himeji city



3.2 Prediction Results for the Whole City

Fig. 2 shows the prediction results for the whole of Himeji city. According to Fig. 2, the prediction models were apparently able to predict the number of asthmatic attacks. Table 1 shows the CC and MAPE of the prediction results for the whole of Himeji city. As shown in Table 1, the AR model obtained the highest total of CC = 0.655. On the other hand, prediction precision was reduced when fuzzy rules were added to each prediction model.
Table 1. Prediction results for the whole of Himeji city

        AR            SARIMA        Fuzzy-AR      Fuzzy-SARIMA
        CC     MAPE   CC     MAPE   CC     MAPE   CC     MAPE
2008    0.695  14.2   0.566  16.7   0.575  15.9   0.449  18.1
2009    0.675  10.9   0.477  14.3   0.746   9.1   0.509  13.5
2010    0.567  12.9   0.709   8.6   0.596  14.0   0.730   9.4
2011    0.684  11.8   0.719  12.2   0.497  13.3   0.437  13.3
total   0.655  12.5   0.618  13.0   0.604  13.1   0.531  13.6

3.3 Prediction Results for the Inland Area

Fig. 3 shows the prediction results for the inland area of Himeji city. According to Fig. 3, the prediction models were apparently able to predict the number of asthmatic attacks. Table 2 shows the CC and MAPE of the prediction results for the inland area. As shown in Table 2, the SARIMA model obtained the highest total of CC = 0.733. On the other hand, the fuzzy SARIMA model obtained CC = 0.679; prediction precision was reduced when fuzzy rules were added to each prediction model. In comparison with the results for the whole of Himeji city, the prediction for the inland area achieved sufficient precision.

[Plot: monthly number of asthmatic attacks (cases, 0–1000), 2008–2011, comparing the observed values with the predictions of the AR, SARIMA, Fuzzy-AR, and Fuzzy-SARIMA models.]

Fig. 3. Prediction results for the inland area of Himeji city

Table 2. Prediction results for the inland area of Himeji city

        AR            SARIMA        Fuzzy-AR      Fuzzy-SARIMA
        CC     MAPE   CC     MAPE   CC     MAPE   CC     MAPE
2008    0.880  11.8   0.835  13.1   0.820  11.2   0.732  14.4
2009    0.623  13.5   0.620  16.3   0.627  12.3   0.626  14.2
2010    0.491  14.7   0.600  16.3   0.567  13.6   0.638  16.6
2011    0.631  12.1   0.877   7.9   0.410  13.0   0.718  10.0
total   0.656  13.0   0.733  13.4   0.606  12.5   0.679  13.8

3.4 Prediction Results for the Coastal Area

Fig. 4 shows the prediction results for the coastal area of Himeji city. According to Fig. 4, the prediction models were apparently able to predict the number of asthmatic attacks in 2009. However, they were not able to follow the volatility of the observations in the other years. Table 3 shows the CC and MAPE of the prediction results for the coastal area. As shown in Table 3, the AR model obtained the highest total of CC = 0.549. On the other hand, the fuzzy AR model obtained CC = 0.483; prediction precision was reduced when fuzzy rules were added to each prediction model.

Moreover, the total CC of every prediction model was lower than that for the inland area. As shown in Fig. 3 and Fig. 4, the observed values in the coastal area show volatility and aperiodic change in comparison with those of the inland area. Because the AR model and the SARIMA model contain an autoregressive process, their prediction results are greatly affected by sudden fluctuations in the past. For this reason, prediction for the coastal area by a time-series model is considered difficult.

[Plot: monthly number of asthmatic attacks (cases, 0–1000), 2008–2011, comparing the observed values with the predictions of the AR, SARIMA, Fuzzy-AR, and Fuzzy-SARIMA models.]

Fig. 4. Prediction results for the coastal area of Himeji city

Table 3. Prediction results for the coastal area of Himeji city

        AR            SARIMA        Fuzzy-AR      Fuzzy-SARIMA
        CC     MAPE   CC     MAPE   CC     MAPE   CC     MAPE
2008    0.488  17.9   0.602  19.8   0.365  18.4   0.559  19.6
2009    0.792  10.0   0.618  10.1   0.825  10.0   0.621   9.6
2010    0.513  13.3   0.566   8.9   0.514  15.0   0.556   9.8
2011    0.403  14.2   0.359  17.2   0.229  15.0   0.099  17.6
total   0.549  13.9   0.536  14.0   0.483  14.6   0.459  14.1

4 Conclusion

In this study, we predicted the number of asthmatic attacks in Himeji city by region with the AR model, the SARIMA model, the fuzzy AR model, and the fuzzy SARIMA model. For each area, the number of attacks could be predicted with adequate precision, despite the difficulty of prediction for elderly persons [2]. In particular, the prediction for the inland area achieved sufficient precision: the SARIMA model obtained the highest total of CC = 0.733 there. On the other hand, the prediction for the coastal area did not achieve sufficient precision: the AR model obtained the highest total of CC = 0.549. Thus, the possibility that asthmatic attacks are caused by different factors in each area was suggested, and this study showed the validity of prediction by region.

In the future, we will need to identify the causes of asthmatic attacks in each region.

Acknowledgements. This research has been supported in part by a grant program of Himeji city.

References
1. Kaku, Y., Kuramoto, K., Kobashi, S., Hata, Y.: Asthmatic attacks prediction considering weather factors based on Fuzzy-AR model. In: FUZZ-IEEE, pp. 2023–2026 (June 2012)
2. Kaku, Y., Kuramoto, K., Kobashi, S., Hata, Y.: Predict time series data for the number of
asthmatic attacks in Himeji by Fuzzy-AR model. In: Proc. of 2012 Fifth Int. Conf. on
Emerging Trends in Engineering and Technology, pp. 314–317 (2012)
3. Himeji city hall home page, http://www.city.himeji.lg.jp/
4. Wang, J., Zhang, T.: Degradation prediction method by use of autoregressive algorithm.
In: IEEE ICIT, pp. 1–6 (April 2008)
5. Bennett, F.M., Christini, D.J., Ahmed, H., Lutchen, K., Hausdorff, J.M., Oriol, N.: Time
series modeling of heart rate dynamics. In: Computers in Cardiology, p. 273 (September
1993)
6. Gersch, W., Brotherton, T.: AR model prediction of time series with trends and
seasonalities: A contrast with Box-Jenkins modeling. In: Decision and Control Including
the Symposium on Adaptive Processes, vol. 19, p. 988 (December 1980)
84 S. Kikuchi et al.

7. Yabuuchi, Y., Watada, J., Toyoura, Y.: Fuzzy AR Model of Stock Price. Scientiae Mathematicae Japonicae Online 10, 485–492 (2004)
8. Palaniappan, R., Reveendran, P., Nishida, S., Saiwaki, N.: Evolutionary fuzzy ARTMAP
for autoregressive model order selection and classification of EEG signals. Systems, Man,
and Cybernetics 5, 3682 (2000)
9. Liu, J., Mo, J., Pourbabak, S.: Human cardiovascular system identification and application
using a hybrid method of auto-regression and neuro-fuzzy inference systems. Machine
Learning and Cybernetics 7, 4107 (2004)
10. Chen, B.S., Peng, S.C., Wang, K.C.: Traffic modeling, prediction, and congestion control
for high-speed networks: a fuzzy AR approach. IEEE Transaction on Fuzzy Systems 8(5),
491–508 (2000)
11. Watanabe, N.: A fuzzy rule based time series model. Fuzzy Information 2, 936 (2004)
12. Akaike, H.: A new look at the statistical model identification. IEEE Transactions on Automatic Control 19(6), 716–723 (1974)
13. Shibata, R.: Selection of the order of an autoregressive model by Akaike’s information
criterion. Biometrika 63(1), 117–126 (1975)
14. Himeji city Medical Association home page, http://www.himeji-med.or.jp/
15. Japan Meteorological Agency home page,
http://www.jma.go.jp/jma/index.html
Analysis of 3D Polygon Data for Comfortable
Grip Form Design

Yuji Sasano1, Hiroharu Kawanaka1, Kazuyoshi Takahashi1,2, Koji Yamamoto3,
Haruhiko Takase1, and Shinji Tsuruoka4

1 Graduate School of Engineering, Mie Univ., 1577 Kurima-Machiya, Tsu, Mie 514-8507, Japan
2 Banzai Factory Inc., 69 Michinoue Yonesaki, Rikuzentakata, Iwate 029-2206, Japan
3 Suzuka Univ. of Medical Science, 1001-1 Kishioka, Suzuka, Mie 510-0226, Japan
4 Graduate School of Regional Innovation Studies, Mie Univ., 1577 Kurima-Machiya, Tsu, Mie 514-8507, Japan
sasano@ip.elec.mie-u.ac.jp

Abstract. Recently, many methodologies for industrial design considering usability have been widely studied. For example, Banzai Factory Inc. has developed a tailor-made cup with curves fitted to each person's grip form, called "Waga-Hai". In the manufacturing process of "Waga-Hai", the person's grip form is converted to 3D polygon data. We believe that these data contain important information for building mathematical models that determine a comfortable grip form. In this paper, we developed a method using 3D image processing techniques to extract features, i.e., the positions/directions of the fingers and the relationships among them, from the 3D polygon data. The obtained results showed that gripping trends could be categorized into 5 classes and that the obtained features would be effective for the mathematical models.

Keywords: Waga-Hai, Feature Extraction, Comfortability of Gripping, 3D Polygon Data, Trend Analysis.

1 Introduction

Gripping is a very important function in daily life, and there is a best grip form for each person. Such a form contains information about the physical and mental condition of the person, and there might be common mathematical models by which we can describe the best form. In this paper, we call this "Universal Design". Research on Universal Design thus poses a big challenge: to develop a common way of designing tools (e.g., cups, handles) best suited to handicapped and elderly persons. For this objective, Takahashi and his colleagues have developed a cup for people who cannot use a standard one easily [1]. They had a person grip a clay mold to capture the best form, which was then converted to 3D polygon data. From the 3D polygon data, a cup called "Waga-Hai" was created. We believe that the


key information for deriving the models for Universal Design will be contained in the 3D polygon data. With such motives, we are now trying to derive mathematical models to manufacture products with comfortable grip forms for everybody.

For example, Kawanaka et al. proposed a feature extraction method for the 3D polygon data of Waga-Hai [2]. In [2], the distal phalanx regions of the fingers were extracted and grip force vectors for each finger were obtained numerically. Hirata et al. proposed an extraction method for distal phalanx regions considering concave patterns on the clay [3]. Takahashi et al. discussed the distributions of the grip forces obtained by Hirata's method [4]. In these studies, the distal phalanx regions were extracted from the 3D polygon data, and the obtained features were discussed. We think, however, that the data contain other effective features for expressing the comfortability of gripping, so analytical methods for such features are required for the mathematical models.

In this paper, we propose a method to extract new features from the 3D polygon data and discuss their effectiveness. As the first step of this study, we focus on the shapes of the fingers and the relationships among them. A method using 3D image processing techniques is proposed to extract the new features. The extracted features are used for trend analysis with a clustering method.

2 Materials

Figure 1 shows the outline of the manufacturing of Waga-Hai. In the molding process, a wooden cup covered with clay is used (Fig. 1(a) and (b)). People who order Waga-Hai grip the clay mold as in Fig. 1(c). The mold is then laser-scanned at a 0.2 mm pitch to make 3D polygon data (Fig. 1(d) and (e)). Using these data, a computer system precisely controls a chipper machine to carve out of wood a cup with the same outer structure as the 3D polygon data (Fig. 1(f)). Finally, Waga-Hai is completed after a Japanese lacquering process (Fig. 1(g)).

The 3D polygon data obtained in the molding process are given as an STL file [5]. Generally, the surface of a 3D object is composed of many small triangles, and each triangle is expressed by the x, y, and z positions of its three apices and a normal direction vector based on the right-hand rule. In the case of the 3D polygon object shown in Fig. 1(e), the total number of polygons is more than 150,000. In this paper, 30 persons' 3D polygon data provided by Banzai Factory Inc., all of whom are right-handed, are used as experimental materials.

3 Method
3.1 Unrolling 3D Polygon Object
Fig. 1. Manufacturing Process of Waga-Hai

To extract gripping features from the 3D polygon data and analyze them, the regions of the fingers must be determined. It is, however, very difficult to analyze the 3D polygon data directly, because the STL data consist only of a set of polygon information. Fortunately, the 3D polygon object can be treated in cylindrical coordinates. To obtain the relationships and determine the location of each finger, we first unroll the 3D polygon object into a 2D image as in Fig. 2.
Figure 3 shows the outline of the unrolling method. The 3D polygon data are first divided into small square regions, and each region corresponds to a pixel of the converted 2D image. The intensity of each pixel is determined from the polygons in the divided small region. In Fig. 3, there are 6 polygons in region A, and the intensity of the region is determined from the average depth of the dents generated by gripping. The depth of a dent generated by gripping, D(θ, z), is calculated by

D(θ, z) = Rref (z) − r(θ, z). (1)

In the above formula, Rref(z) denotes the distance between the z axis and the surface of the clay at an arbitrary point z. In this study, we use the following formula to approximate the shape of the clay without molding.

        Rref(z) = a z^4 + b z^3 + c z^2 + d z + e        (2)
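Fitting the quartic reference surface (2) by least squares is what a multiple regression on a polynomial model amounts to. A minimal sketch (function names and the sample coefficients are illustrative, not from the paper):

```python
import numpy as np

def fit_reference_radius(z, r):
    # Least-squares fit of Rref(z) = a z^4 + b z^3 + c z^2 + d z + e (Eq. (2)),
    # using samples taken from the unmolded part of the clay surface.
    return np.polyfit(z, r, 4)

def dent_depth(z, r_measured, coeffs):
    # Eq. (1): D(theta, z) = Rref(z) - r(theta, z) at one (theta, z) sample.
    return float(np.polyval(coeffs, z) - r_measured)

# Illustrative synthetic surface (coefficients are not from the paper).
true_coeffs = np.array([1e-3, -2e-2, 0.1, 0.5, 40.0])
z = np.linspace(0.0, 10.0, 50)
r_unmolded = np.polyval(true_coeffs, z)
coeffs = fit_reference_radius(z, r_unmolded)
depth = dent_depth(5.0, np.polyval(true_coeffs, 5.0) - 1.2, coeffs)  # 1.2 mm dent
```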

To determine the values of the coefficients in the formula, we select only the region without molding and employ multiple regression analysis. When there is no polygon in a small region, as in region B, a ray tracing technique is used to determine the intensity of the pixel [6][7].

Fig. 2. Unrolling of 3D Polygon Object

Fig. 3. Unrolling Process

Figure 4 illustrates the outline of the ray tracing technique. In the figure, an arbitrary point vector p = (x y z)^T on the straight line l that passes through the point s = (0 0 z1)^T and has direction vector d = (cos θ sin θ 0)^T can be expressed as follows.
        p = s + t d = (0, 0, z1)^T + t (cos θ, sin θ, 0)^T        (3)

In the above formula, t is a parameter of the line, which is calculated by

        t = ((a − s) · n) / (n · d),   t > 0.        (4)

Fig. 4. Ray Tracing Technique for Unrolling 3D Object

Fig. 5. Unrolled 2D Image of Waga-Hai

By using the above formulas, the point at the intersection of the line l with the plane of a polygon is obtained. Here, a, b, and c are the position vectors of the apices of the polygon, and n is its normal vector. After this, the cross products (p − a) × (b − a), (p − b) × (c − b), and (p − c) × (a − c) are calculated, and the obtained intersection point is judged by checking the direction of each vector: if the direction of each product is the same as the normal direction of the polygon, the intersection point is located inside the polygon. The above procedure is applied to all small regions.
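The ray-casting step, Eqs. (3)-(4) plus the inside-polygon test, can be sketched as a single routine. This is an illustrative implementation assuming triangular polygons; the function name and test geometry are not from the paper.

```python
import numpy as np

def ray_triangle_intersection(s, d, a, b, c):
    # Intersect the ray p = s + t*d (t > 0) with triangle (a, b, c).
    # Returns the intersection point, or None if the ray misses it.
    n = np.cross(b - a, c - a)        # normal by the right-hand rule
    denom = np.dot(n, d)
    if abs(denom) < 1e-12:            # ray parallel to the triangle's plane
        return None
    t = np.dot(a - s, n) / denom      # plane intersection, Eq. (4)
    if t <= 0.0:
        return None
    p = s + t * d                     # point on the ray, Eq. (3)
    # Inside test: each edge cross product must point along the normal.
    for u, v in ((a, b), (b, c), (c, a)):
        if np.dot(np.cross(v - u, p - u), n) < 0.0:
            return None
    return p

# A triangle in the plane x = 1, probed by two horizontal rays from the axis.
tri = [np.array(v) for v in ([1.0, -1.0, 0.0], [1.0, 1.0, 0.0], [1.0, 0.0, 2.0])]
hit = ray_triangle_intersection(np.array([0.0, 0.0, 0.5]),
                                np.array([1.0, 0.0, 0.0]), *tri)
miss = ray_triangle_intersection(np.array([0.0, 0.0, 5.0]),
                                 np.array([1.0, 0.0, 0.0]), *tri)
```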
Figure 5 shows an example of an unrolled image. In the obtained image, the x and y directions correspond to the θ and z directions of the original cylindrical coordinates, respectively. As can be seen, the intensity of each pixel reflects the shape of the clay on the cup.

3.2 Trend Analysis of Gripping

After unrolling, finger features are extracted from the obtained image. In this paper, we suppose that each phalanx region of a finger, i.e., the proximal, middle, and distal phalanx regions, can be approximated by an ellipse. We developed a feature extraction tool for the unrolled image and manually approximated these regions by elliptical objects as in Fig. 6. In this figure, the digits on each elliptical object denote the "finger type (1: thumb, 2: index finger, ...)" and the "phalanx type (1: distal phalanx, 2: middle phalanx, 3: proximal phalanx)", respectively. Next, the polygon data corresponding to the center and the two end-points of each elliptical object are extracted. After this, the directional vectors that indicate the distances between the thumb and the other fingers (d12, d13, ..., d15), shown in Fig. 7(a), are calculated. We also calculate the angles between the directional vector of the thumb and those of the other fingers (Fig. 7(b)). The directional vector of the thumb is obtained from the line passing through the two end points of ellipse "1-1" (the red line in Fig. 7(b)).

Fig. 6. Ellipse Approximation of Phalanx Regions

Fig. 7. Positional Relationships between Thumb and other Fingers
To discuss the trends of gripping, we classified the 3D polygon data into clusters with a clustering method. In this process, the extracted features are normalized and used as the elements of the feature vectors as follows.

        (d12, θ12, d13, θ13, d14, θ14, d15, θ15)        (5)

This paper uses hierarchical clustering with Ward's method and the Euclidean distance [8].
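Ward clustering over the normalized feature vectors (5) can be sketched without an external clustering library via the Lance-Williams update for Ward's method. This is an illustrative stand-in for the hierarchical clustering the paper performs; the two-dimensional sample data below are synthetic.

```python
import numpy as np

def ward_clusters(X, k):
    # Agglomerative clustering with Ward's method (Lance-Williams update)
    # on squared Euclidean distances, merged until k clusters remain.
    n = len(X)
    members = {i: [i] for i in range(n)}
    d2 = {(i, j): float(np.sum((X[i] - X[j]) ** 2))
          for i in range(n) for j in range(i + 1, n)}
    while len(members) > k:
        i, j = min(d2, key=d2.get)                 # cheapest merge (i < j)
        ni, nj = len(members[i]), len(members[j])
        for m in members:
            if m in (i, j):
                continue
            nm = len(members[m])
            dim = d2[min(i, m), max(i, m)]
            djm = d2[min(j, m), max(j, m)]
            # Ward's recurrence for the distance to the merged cluster
            d2[min(i, m), max(i, m)] = (
                (ni + nm) * dim + (nj + nm) * djm - nm * d2[i, j]
            ) / (ni + nj + nm)
        members[i] += members.pop(j)
        d2 = {pair: v for pair, v in d2.items() if j not in pair}
    labels = np.empty(n, dtype=int)
    for lab, idx in enumerate(members.values()):
        labels[idx] = lab
    return labels

# Two synthetic clumps of normalized feature vectors (illustrative only).
X = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],
              [5.0, 5.0], [5.1, 5.0], [5.0, 5.1]])
labels = ward_clusters(X, 2)
```

Cutting the merge sequence at k = 5 instead of k = 2 would yield the five clusters discussed in Sect. 4.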

Fig. 8. Clustering Result

4 Experimental Results and Discussions

4.1 Results

We conducted experiments using the 30 persons' polygon data for trend analysis. Figure 8 shows the obtained clustering result. As can be seen, the gripping object data can be classified into 5 clusters. In this section, we call these clusters A, B, C, D, and E from the left, sequentially.
Figure 9 shows examples of grip forms (unrolled images) that belong to each cluster. In the case of Fig. 9(a), the thumb faced the index finger, and the distances between the thumb and each finger were large. In other words, these images indicate that the hand was quite small compared with the gripped object (i.e., the clay). When a person could not grip the object tightly, the vertical position of the thumb was near that of the index finger; as a result, such grip forms were classified into this cluster. In the case of Fig. 9(b), most grip forms had the following tendency: the thumb was located in front of the index/middle finger, and the distances between the thumb and each finger were very small. These images mean that the hand was large enough to grip the object tightly; thus, the distances became small and the thumb was positioned near these fingers. Cluster C contained many grip forms, and most of them had the same kind of tendency as those in cluster B. The distances between the thumb and the other fingers were, however, not as small as those in cluster B. It is considered that such results were obtained because the hand was large enough to grip the object but not enough to grip it tightly. In the case of Fig. 9(d), most thumbs point toward the middle finger. From our discussions, it became clear that the hand was not so large for gripping, and when some subjects gripped the object, the

Fig. 9. Examples of each Cluster's Gripping Objects: (a) Cluster A, (b) Cluster B, (c) Cluster C, (d) Cluster D, (e) Cluster E

thumb faced the middle finger. In the case of cluster E, most subjects gripped the object as in cluster D, but the distances between the thumb and the other fingers were not so large.

4.2 Discussions

With the proposed method, the 30 polygon data of Waga-Hai could be classified into 5 clusters. From the obtained results, we conclude that each cluster was formed according to the conditions shown in Table 1. Thus, it is considered that the extracted features carry key information for classification into clusters. In addition, we believe that other 3D polygon data stored in the database can also be classified into clusters by using these features. We also expect that the classified 3D polygon data have common tendencies, and that these would become a good feature descriptor for Universal Design.

On the other hand, some gripping objects were not similar to the others in the same cluster. For example, Fig. 10 shows examples of gripping objects classified into cluster C. As can be seen, only the boundaries of the distal phalanx regions were clear in the case of No. 27. On the other hand, the other gripping objects

Table 1. Condition of Classification

Clusters                   A                B                C                D              E
Thumb direction            Near to index    Near to index/   Near to index/   Middle         Middle
                           finger           middle finger    middle finger    finger         finger
Distances between thumb
and other fingers          Large            Small            Large            Large          Small

Fig. 10. Examples of Dissimilar Cases in Cluster C

(No. 5, 21, 23) did not have such a feature; in those cases, the boundaries of all fingers were clear. It is considered that such results were obtained because only the distances and angles between the thumb and the other fingers were employed as features. Therefore, additional features, such as the depth and shape of each finger, would be required to classify the gripping objects accurately.

5 Conclusion and Future Works


In this paper, we discussed a method that extracts new features from the 3D polygon data of Waga-Hai, and the effectiveness of those features. This paper focused on the shapes of the fingers and the relationships among them. A method using 3D image processing techniques was proposed to extract the new features from the 3D polygon data. In experiments using 30 persons' polygon data, the data could be classified into 5 clusters. It was also indicated that the classified data carry common key information for Universal Design.

As future work, additional trend analyses considering other features will be required to determine the mathematical model for Universal Design. Furthermore, we also have to make prototypes of gripping objects, i.e., cups, for practical experiments in welfare facilities.

References
1. Official Web Site of Banzai Factory Inc., http://www.sagar.jp/
2. Kawanaka, H., Yamamoto, K., Takahashi, K., Suzuki, K.: Feature Extraction and Visualization from 3D Polygon Data for Determining a More Comfortable Grip Form. Intl. J. of Innovative Computing Information and Control 7(5B), 3017–3018 (2011)
3. Hirata, T., Takahashi, K., Kawanaka, H., Yamamoto, K., Takase, H., Tsuruoka, S.:
A Study on Extraction Method of Distal Phalanx Regions from 3D Polygon Data for
Determining a More Comfortable Grip Form. In: Proc. of the 12th Intl. Symposium
on Advanced Intelligent Systems, pp. 184–187 (2011)
4. Takahashi, K., Kawanaka, H., Hirata, T., Yamamoto, K., Suzuki, K., Takase, H.,
Tsuruoka, S.: A Study on 3D Polygon Data Analysis and Designing Method for De-
termining a More Comfortable Grip Form. The Japanese Journal of Ergonomics 48,
406–407 (2012)
5. Official Web Site of 3D Systems Corporation, http://www.3dsystems.com/
6. Neri, E., Caramella, D., Bartolozzi, C.: 3D Image Processing: Techniques and Clin-
ical Applications. Springer (2002)
7. Shirley, P., Keith Morley, R.: Realistic Ray Tracing, 2nd edn. A K Peters Ltd. (2008)
8. Ward, J.H.: Hierarchical Grouping to Optimize an Objective Function. Journal of
the American Statistical Association 58(301), 236–244 (1963)
Blood Pressure Estimation System
by Wearable Electrocardiograph

Tatsuhiro Fujimoto1, Hiroshi Nakajima2, Naoki Tsuchiya2, and Yutaka Hata1,3


1 Graduate School of Engineering, University of Hyogo, Hyogo, Japan
2 Technology and Intellectual Property H.Q., OMRON Corporation, Kizugawa, Japan
3 WPI Immunology Frontier Research Center, Osaka University, Osaka, Japan

Abstract. This paper proposes a blood pressure estimation system based on the electrocardiogram (ECG). The ECG is measured unobtrusively by a wearable sensor, which provides the acquired data to a personal computer by wireless communication. For estimation, the system extracts the heart rate and the R-T intervals from the ECG. The heart rate is calculated from the R-R intervals, and the R-T intervals are extracted based on fuzzy logic. From this information and the body composition of the subject, the system estimates the mean blood pressure. In our experiment, we employed six subjects and estimated their mean blood pressure. As a result, our proposed method estimated the blood pressures with low estimation errors and high correlation coefficients.

Keywords: blood pressure, wearable sensor, electrocardiogram, fuzzy logic.

1 Introduction

Recently, lifelog services have been receiving considerable attention in medical care and health management. Lifelogging for health management records biological information such as blood pressure, body weight, and activity [1]-[6]. Furthermore, it assists self-health management with graphs showing the trends of these biological data. For example, we can improve our lifestyle by recognizing trends of health status change from past information. By using lifelog services, we are able to keep up the motivation for health management and detect abnormal health conditions at an early stage. In addition, they enhance medical advice by reference to past trends. Thus, lifelog services are useful for disease prevention and lifestyle improvement. For these purposes, it is important to keep logging biological information throughout the day.

As one example of lifelogging, OMRON Healthcare Co., Ltd. has provided "WellnessLINK" since 2010. This service records biological data such as body weight, blood pressure, exercise, and so on. In addition, a service to predict future body weight from past body weight has been developed [3]. Moreover, many internet lifelog services are provided by many organizations for healthcare. In these services, biological data are measured only once or twice a day. However, for some biological information, such as exercise and blood pressure, it is important to measure the dynamic change during the day. For exercise, several wearable sensors have been developed. For example,


SoftBank HealthCare will provide "fitbit flex" [7]. This sensor measures body movement and calculates calorie consumption, exercise, and so on. For activity monitoring, we developed a human activity estimation system with "RF-ECG" (Micro Medical Device Inc.) [8]. RF-ECG consists of a built-in electrocardiograph and a triaxial accelerometer, and the system estimates human activity from the electrocardiogram (ECG) and acceleration data. By using this system, we are able to enhance our lifestyle easily. However, a blood pressure monitoring system has not been developed.

Blood pressure changes constantly under the influence of many factors such as stress, emotion, exercise, and so on. The factors mentioned above also affect the ECG signal. Thus, it is important to measure the dynamic change of blood pressure by recording the ECG signal. Hassan et al. [9] developed a portable monitoring kit based on the ECG signal. Their system estimates blood pressure from the R-R intervals by using a neural network. However, it requires over 1,200 measured blood pressures per person in the learning process, which is not practical for a home-use system, and the device is too big to wear. Thus, we need a system that can estimate blood pressure more easily.

In this study, we propose a wearable blood pressure estimation system based on the ECG signal. We employ the wearable multi-sensor "RF-ECG". Medically, blood pressure is given by the product of cardiac output and vascular resistance [10], and cardiac output is represented by the product of stroke volume and heart rate. In ECG signals, the Q-T interval is related to stroke volume, and the heart rate is calculated from the R-R intervals. We extract these factors by peak point detection and fuzzy logic [11]. Furthermore, we assume that vascular resistance is related to the body mass index (BMI) of the subject. We estimate the mean blood pressure (MBP) for each factor from the trend of the learning data. MBP combines the natures of the systolic and diastolic blood pressures. In our experiment, we employed six volunteers and estimated their blood pressure. The proposed method estimated it with high accuracy. Moreover, our method requires no large set of training data.

2 Preliminaries

In this section, we describe the experimental system. Fig. 1 shows the multi-sensor
system “RF-ECG”. The system consists of a built-in electrocardiograph and a three-
dimensional accelerometer. The internal electrocardiograph acquires the ECG signal.
The size of the sensor is 40 mm × 35 mm × 7.2 mm. Fig. 2 shows the system setup. In
this system, the subject wears the sensor on his/her left thorax. The ECG signal is
transmitted to a personal computer by wireless communication. The sampling rate of
data acquisition is 256 Hz. Fig. 3 shows the PQRST-wave of the ECG signal. As shown
in Fig. 3, one cycle of the ECG signal consists of the P, Q, R, S and T-waves. We
measure the reference blood pressure with a sphygmomanometer (OMRON Corporation,
HEM-7081-IT) shown in Fig. 4. The sphygmomanometer acquires blood pressure in the
range from 0 to 299 mmHg.
Blood Pressure Estimation System by Wearable Electrocardiograph 97

Fig. 1. Size of the sensor (35 mm wide)

Fig. 2. ECG signal acquisition system (multi-sensor, receiver and personal computer)

Fig. 3. PQRST-wave (amplitude [mV] versus time [sec], with the P, Q, R, S and T peaks labeled)

Fig. 4. Sphygmomanometer

3 Blood Pressure Estimation Method

3.1 Outline of the Procedure

Fig. 5 shows the procedure of our proposed method. In the pretreatment process, the
method denoises the acquired ECG signal, and the ECG signal is normalized by its
maximum and minimum amplitudes. From this normalized signal, we detect the R-wave
and T-wave. The R-wave is detected by a peak point detection method, and the T-wave
is detected by fuzzy logic with interval and amplitude information. Finally, we
estimate the blood pressure value from these detected waves and body composition
information.

Start

Step 1. Pretreatment ECG signal

Step 2. R-wave detection

Step 3. T-wave detection

Step 4. Estimation of blood pressure

End
Fig. 5. The flowchart of the proposed method

3.2 Pretreatment

Fig. 6 shows examples of acquired ECG signals. Fig. 6 (a) contains no noise, but an
ECG signal may contain noise as in Fig. 6 (b). Such noise is caused by sweating and
exercise.

Fig. 6. Acquired ECG signals: (a) ECG signal, (b) ECG signal that includes noise

It is necessary to remove the noise from the ECG signal. We employ a band-pass filter
for denoising because, as shown in Fig. 6 (b), the ECG contains noise in both the
high-frequency and low-frequency bands. Fig. 7 shows the denoising process, in which
we apply a band-pass filter (1.0-50.0 Hz) to the ECG signal. Next, we apply the
normalization process shown in Fig. 8. Normalization is defined by (1).
A′(t) = ( Asrc(t) − min(Asrc(t)) ) / ( max(Asrc(t)) − min(Asrc(t)) )   (1)

Here, Asrc(t) and t denote the acquired ECG signal and the acquisition time,
respectively, and A′(t) denotes the normalized ECG signal.
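To make the pretreatment concrete, here is a minimal pure-Python sketch of the two steps, assuming the ECG is a plain list of amplitude samples. The moving-average detrend only stands in for the low-frequency cut of the paper's 1.0-50.0 Hz band-pass filter (a real implementation would use a proper FIR/IIR filter); the function names and window length are our own illustration.

```python
def normalize(signal):
    """Min-max normalization of Eq. (1): maps the signal into [0, 1]."""
    lo, hi = min(signal), max(signal)
    return [(a - lo) / (hi - lo) for a in signal]

def detrend(signal, window=257):
    """Crude baseline removal: subtract a centered moving average.
    This only stands in for the low-frequency cut of the paper's
    1.0-50.0 Hz band-pass filter."""
    half = window // 2
    out = []
    for i in range(len(signal)):
        seg = signal[max(0, i - half):i + half + 1]
        out.append(signal[i] - sum(seg) / len(seg))
    return out
```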

Fig. 7. Denoising process



Fig. 8. Normalization process

3.3 R-Wave Detection


Fig. 9 shows the R-wave and the R-R interval. The R-wave is defined as a peak of the
ECG, and the R-R interval is the time between successive R-waves. The R-waves are
detected by peak point detection in the ECG signal. From the R-R interval (in
seconds), the heart rate (HR) is calculated by (2). We employ HR as one of the
factors for blood pressure.
HR = 60 / (R−R interval) [bpm]   (2)

Fig. 9. R-wave and R-R interval
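The R-wave and heart-rate step can be sketched as follows, assuming the normalized ECG is a plain list sampled at 256 Hz. The threshold and refractory period are illustrative values, not taken from the paper, and HR is converted with 60/RR for RR measured in seconds.

```python
def detect_r_waves(ecg, fs=256, threshold=0.8, refractory=0.25):
    """Detect R-wave sample indices as local maxima above a threshold,
    skipping a refractory period after each detection (sketch only)."""
    skip = int(refractory * fs)
    peaks, i = [], 1
    while i < len(ecg) - 1:
        if ecg[i] >= threshold and ecg[i] >= ecg[i - 1] and ecg[i] >= ecg[i + 1]:
            peaks.append(i)
            i += skip          # no second peak within the refractory period
        else:
            i += 1
    return peaks

def heart_rate(peaks, fs=256):
    """Mean HR in bpm from the R-R intervals, with RR in seconds."""
    rr = [(b - a) / fs for a, b in zip(peaks, peaks[1:])]
    return 60.0 / (sum(rr) / len(rr))
```

For example, spikes spaced exactly one second apart yield 60 bpm.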

3.4 T-Wave Detection


Fig. 10 shows the Q-wave and T-wave. In general, the Q-T interval represents the
period of systole, so it affects blood pressure directly; for example, if the Q-T
interval becomes long, cardiac output decreases. However, the Q-wave may disappear
from the ECG signal, as in Fig. 10, under bad measurement conditions. Therefore, to
estimate the systolic period reliably, we employ the R-T interval shown in Fig. 11
instead of the Q-T interval as one of the factors for blood pressure.
Fig. 10. Q-wave and T-wave

Fig. 11. R-T interval (from the R-wave to the ending time of the T-wave)

We detect the ending time of the T-wave by fuzzy logic [11]. In the ECG signal, the
T-wave appears after the R-wave and before the next P-wave. In addition, the ECG
signal forms a large curved waveform from the starting to the ending time of the
T-wave, and it calms down in the interval from the T-wave to the P-wave. From these
facts, we obtain the following knowledge.
Knowledge 1: T-waves exist after R-waves.
Knowledge 2: The T-wave is a large curved signal.
Knowledge 3: ECG signals calm down after T-waves.
For detection, we consider several indexes for a reference time τ, which is a
candidate for the ending time of the T-wave. The interval time I(τ) defined by (3)
represents the interval between the time of the immediately preceding R-wave, tR,
and the reference time. Two sums of absolute variations, DB(τ) and DA(τ), are
calculated to evaluate the large and small variations, respectively: DB(τ) is
computed by (4) over a window of length tw before the reference time τ, and DA(τ)
is computed by (5) over the window after it.

I(τ) = τ − tR [sec]   (3)

DB(τ) = Σ_{t=0}^{tw} | AN(τ − t) − AN(τ − t + 1) |   (4)

DA(τ) = Σ_{t=0}^{tw} | AN(τ + t) − AN(τ + t + 1) |   (5)

From this knowledge, the following fuzzy IF-THEN rules are derived.
Rule 1: IF the interval I(τ) between the R-wave and a reference time is CLOSE to the
existence period of the T-wave, THEN the fuzzy degree μI(τ) is high.
Rule 2: IF the sum of absolute variations DB(τ) is HIGH, THEN the degree for the
before points μbp(τ) is high.
Rule 3: IF the sum of absolute variations DA(τ) is LOW, THEN the degree for the
after points μap(τ) is high.
From these rules, the fuzzy membership functions shown in Fig. 12 are constructed.
The fuzzy parameters t1, t2, t3, t4, b1, b2, a1 and a2 are set to the values that
give the maximum precision on our blood pressure database. The fuzzy degrees μI(τ),
μbp(τ) and μap(τ) are calculated by (6), (7) and (8), respectively.

μI(τ) = min( CLOSE, S_I(τ)(t) )   (6)

μbp(τ) = min( HIGH, S_DB(τ)(SAV) )   (7)

μap(τ) = min( LOW, S_DA(τ)(SAV) )   (8)

The fuzzy singleton functions S_I(τ)(t), S_DB(τ)(SAV) and S_DA(τ)(SAV) are defined
by (9), (10) and (11), respectively.

S_I(τ)(t) = 1 if t = I(τ), 0 otherwise   (9)

S_DB(τ)(SAV) = 1 if SAV = DB(τ), 0 otherwise   (10)

S_DA(τ)(SAV) = 1 if SAV = DA(τ), 0 otherwise   (11)
We calculate the fuzzy degree μT(τ) for the reference time τ by (12). We calculate
this degree for every reference time within the R-R interval, and detect the
reference time with the highest fuzzy degree μT as the ending time of the T-wave,
tT. Then we calculate the R-T interval by (13).

μT(τ) = μI(τ) × μbp(τ) × μap(τ)   (12)

R−T interval = tT − tR [sec]   (13)
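A compact sketch of the fuzzy T-wave-end search follows. Because S_I(τ), S_DB(τ) and S_DA(τ) are singletons, the min in (6)-(8) reduces to evaluating the CLOSE/HIGH/LOW memberships at the crisp values I(τ), DB(τ) and DA(τ). The membership shapes and all parameter values below are illustrative stand-ins for the database-tuned parameters of Fig. 12, and the function names are our own.

```python
def trapezoid_close(x, t1, t2, t3):
    """CLOSE membership: rises 0 -> 1 on [t1, t2], falls 1 -> 0 on [t2, t3]."""
    if x <= t1 or x >= t3:
        return 0.0
    if x <= t2:
        return (x - t1) / (t2 - t1)
    return (t3 - x) / (t3 - t2)

def ramp_up(x, b1, b2):
    """HIGH membership: 0 below b1, 1 above b2, linear in between."""
    return min(1.0, max(0.0, (x - b1) / (b2 - b1)))

def ramp_down(x, a1, a2):
    """LOW membership: 1 below a1, 0 above a2, linear in between."""
    return min(1.0, max(0.0, (a2 - x) / (a2 - a1)))

def t_wave_end(a, t_r, fs=256, tw=16, t1=0.1, t2=0.25, t3=0.45,
               b1=0.0, b2=0.05, a1=0.0, a2=0.05):
    """Pick the sample with the highest fuzzy degree mu_T (Eq. 12)."""
    best, best_mu = None, -1.0
    for tau in range(t_r + tw, len(a) - tw - 1):
        interval = (tau - t_r) / fs                                       # Eq. (3)
        d_b = sum(abs(a[tau - t] - a[tau - t + 1]) for t in range(tw + 1))  # Eq. (4)
        d_a = sum(abs(a[tau + t] - a[tau + t + 1]) for t in range(tw + 1))  # Eq. (5)
        mu = (trapezoid_close(interval, t1, t2, t3)
              * ramp_up(d_b, b1, b2) * ramp_down(d_a, a1, a2))            # Eq. (12)
        if mu > best_mu:
            best_mu, best = mu, tau
    return best
```

On a synthetic triangular "T-wave" bump that returns to baseline, the search selects the sample where the large variation ends and the signal calms down.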


Fig. 12. Fuzzy membership functions: (a) membership function of time (CLOSE, over t1, t2, t3), (b) membership function of the before-amplitude variation (HIGH, over b1, b2), (c) membership function of the after-amplitude variation (LOW, over a1, a2)

3.5 Estimation of Blood Pressure

We calculate MBP from HR, the R-T interval and BMI. BMI is calculated by (14). In
general, MBP is defined by (15), where SV denotes the cardiac stroke volume and VR
the peripheral vascular resistance. From (15), SV, HR and VR are proportional to
MBP. Medically, the R-T interval bears an inverse relation to SV, and BMI bears a
proportional relation to VR. From these facts, we define the calculation formula
(16). Here, BPHR, BPRT and BPBMI are defined as in Fig. 13, Fig. 14 and Fig. 15,
respectively. In Fig. 13 and Fig. 15, αHR, βHR, αBMI and βBMI are determined by the
least-squares method from our blood pressure database, where BPlearn denotes the
blood pressure values of the database. In Fig. 14, bp1, bp2, rt1 and rt2 are set to
the values that give the minimum error between the calculated values and the blood
pressure database. In (16), the weights wHR and wRT are calculated by (17), where
eHR and eRT denote the errors between the calculated values and the blood pressure
database.
BMI = Weight[kg] / ( Height[m] × Height[m] )   (14)

MBP = SV × HR × VR   (15)

MBP = wHR × BPHR + wRT × BPRT + BPBMI   (16)

wHR = eRT / (eHR + eRT),   wRT = eHR / (eHR + eRT)   (17)
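Equations (14)-(17) combine into a short routine. The parameter dictionary below (slopes, intercepts, ramp breakpoints and per-factor errors) stands in for the values the paper fits to its blood pressure database, and the ramp for BPRT follows the decreasing shape of Fig. 14; the function name and all numbers in the usage example are illustrative only.

```python
def estimate_mbp(hr, rt, bmi, params):
    """Combine the three per-factor estimates by Eqs. (16)-(17)."""
    bp_hr = params["alpha_hr"] * hr + params["beta_hr"]          # Fig. 13
    # Fig. 14: BP_RT falls linearly from bp1 at rt1 to bp2 at rt2
    rt_c = min(max(rt, params["rt1"]), params["rt2"])
    frac = (rt_c - params["rt1"]) / (params["rt2"] - params["rt1"])
    bp_rt = params["bp1"] + (params["bp2"] - params["bp1"]) * frac
    bp_bmi = (params["alpha_bmi"] * bmi + params["beta_bmi"]
              - params["mean_bp_learn"])                          # Fig. 15
    e_hr, e_rt = params["e_hr"], params["e_rt"]
    w_hr = e_rt / (e_hr + e_rt)                                   # Eq. (17)
    w_rt = e_hr / (e_hr + e_rt)
    return w_hr * bp_hr + w_rt * bp_rt + bp_bmi                   # Eq. (16)
```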
Fig. 13. Calculation of BP on HR: BPHR = αHR × HR + βHR

Fig. 14. Calculation of BP on the R-T interval: BPRT falls linearly from bp1 at rt1 to bp2 at rt2

Fig. 15. Calculation of BP on BMI: BPBMI = αBMI × BMI + βBMI − MEAN(BPlearn)



4 Experimental Results
In our experiment, we employed six volunteers, as shown in Table 1. We recorded
their blood pressures and ECG signals over three days, measuring blood pressure at
rest and after exercise. Fig. 16 shows the relation between our features and the
measured MBP, in which we confirmed the general trends of blood pressure. We
construct the learning data for each subject from the data of the other volunteers.
Table 1. Subject information

Subject ID  Height [cm]  Weight [kg]  BMI [kg/m2]  Age
#1          177          94.4         30.1         24
#2          178          56.4         17.8         22
#3          164          90.4         33.6         21
#4          167          60.2         21.6         22
#5          165          73.9         27.1         24
#6          168          58.0         20.5         22

Fig. 16. Relation between our features and blood pressure: (a) relation between heart rate [bpm] and blood pressure [mmHg], (b) relation between R-T interval [sec] and blood pressure [mmHg]
Fig. 16. (continued) (c) relation between BMI [kg/m2] and blood pressure [mmHg]

We estimated their mean blood pressure. Fig. 17 shows the relation between the
estimated blood pressure and the measured blood pressure, and Table 2 shows the
estimation results. Our proposed method obtained low estimation error and high
correlation; the mean estimation error was 4.96 mmHg. Because the measurement
accuracy of the sphygmomanometer is ±3 mmHg according to its specification sheet, we
consider that our method has sufficient accuracy.

Fig. 17. Relation between estimated and measured blood pressure [mmHg]



Table 2. Estimation result of mean blood pressure

Subject ID  Mean error [mmHg]  Maximum error [mmHg]  Minimum error [mmHg]  Correlation
#1          6.22±2.83          11.46                 2.81                  0.92
#2          4.68±5.56          17.06                 0.46                  0.75
#3          5.52±4.11          11.75                 1.72                  0.90
#4          3.74±2.95          9.13                  0.18                  0.49
#5          6.28±3.97          10.75                 0.01                  0.52
#6          3.30±4.08          10.31                 0.41                  0.93
Mean        4.96±1.26          11.74±2.76            0.93±1.10             0.75±0.20

5 Conclusion

We have proposed a wearable blood pressure estimation system. From the ECG signal,
we calculated the heart rate and the R-T interval by peak point detection and fuzzy
logic. Our proposed method estimated mean blood pressure from the heart rate, the
R-T interval and the subject's BMI. In the experiment, we employed six subjects and
estimated their mean blood pressure. As a result, our proposed method obtained low
estimation error and high correlation. We have confirmed that it is possible to
predict with high accuracy without training data from the subject. Thus, this system
can help discover dysarteriotony when used for long-term measurement.
In the future, in order to generalize the system, we will broaden the age groups of
the subjects. In addition, we will develop a system that combines our proposed
method with our human activity estimation method.

References
[1] Continua Health Alliance home page,
http://www.continuaalliance.org/index.html
[2] Hong, Y., Kim, I., Ahn, S., Kim, H.: Activity Recognition using Wearable Sensors for
Elder Care. In: Proc. of 2008 FGCN 2008. Second Int. Conf. on Future Generation
Communication and Networking, pp. 302–305 (2008)
[3] Tanii, H., Nakajima, H., Tsuchiya, N., Kuramoto, K., Kobashi, S., Hata, Y.: A Fuzzy
Logic Approach to Predict Human Body Weight Based on AR Model. In: Proc. of 2011
IEEE Int. Conf. on Fuzzy Systems, pp. 1022–1025 (2011)
[4] Hata, Y., Yamaguchi, H., Kobashi, S., Taniguchi, K., Nakajima, H.: A Human Health
Monitoring System of Systems in Bed. In: Proc. of IEEE third Int. Conf. on System of
Systems Engineering. CD-ROM (2008)
[5] Hata, Y., Kobashi, S., Kuramoto, K., Nakajima, H.: Fuzzy Biosignal Detection Algorithm
and Its Application to Health Monitoring. International Journal of Applied and
Computational Mathematics 10(1), 133–145 (2011)

[6] Yamamoto, K., Kobashi, S., Hata, Y., Tsuchiya, N., Nakajima, H.: Real time autonomic
nervous system display with air cushion sensor while seated. In: Proc. of 2009 IEEE Int.
Conf. on Systems, Man and Cybernetics, pp. 1116–1121 (2009)
[7] SoftBank HealthCare homepage,
http://www.softbank.jp/mobile/service/softbankhealthcare/
[8] Fujimoto, T., Nakajima, H., Tsuchiya, T., Marukawa, H., Kuramoto, K., Kobashi, S., Hata,
Y.: Wearable human activity recognition by electrocardiograph and accelerometer. In:
Proc. of IEEE 43rd Int. Symp. on Multiple-Valued Logic, pp. 12–17 (2013)
[9] Ali Hassan, M.K., Mashor, M.Y., Mohd Saad, A.R., Mohamed, M.S.: A Portable
Continuous Blood Pressure Monitoring Kit. In: 2011 IEEE Symposium on Business,
Engineering and Industrial Applications (ISBEIA), pp. 503–507 (2011)
[10] Sahoo, A., Manimegalai, P., Thanushkodi, K.: Wavelet Based Pulse Rate and Blood
Pressure Estimation System From ECG and PPG Signals. In: Computer, Communication
and Electrical Technology, pp. 285–289 (2011)
[11] Zadeh, L.A.: Fuzzy Sets and Applications. John Wiley and Sons, New York (1987)
A Fuzzy Human Model for Blood Pressure Estimation

Takahiro Takeda¹, Hiroshi Nakajima², Naoki Tsuchiya², and Yutaka Hata¹,³

¹ Graduate School of Engineering, University of Hyogo,
2167 Shosha, Himeji, Hyogo 671-2280, Japan
takahiro_takeda@ieee.org
² Technology and Intellectual Property H.Q., OMRON Corporation, Kizugawa, Japan
³ WPI Immunology Frontier Research Center, Osaka University, Osaka, Japan

Abstract. This paper describes a blood pressure prediction model. The model
predicts the blood pressure of a subject based on the trend of the blood pressure,
the body weight and the number of steps. To predict it, we build an autoregressive
(AR) model, a linear prediction model, a body-weight-based prediction model and a
steps-based prediction model, and combine these models by fuzzy logic. The fuzzy
degrees are calculated from the mean absolute prediction error, the correlation
coefficient and the variation amount of the learning data. In our experiment, we
collected the blood pressure, body weight and number of steps of 453 subjects from
WellnessLINK, an internet life-log service. Our proposed model predicted their blood
pressures; the mean correlation coefficient between the predicted values and the
measured systolic blood pressures was 0.895.

Keywords: Blood pressure, human model, fuzzy logic, big data, healthcare.

1 Introduction

Recently, lifestyle diseases have become a big problem [1, 2]. Lifestyle diseases
cause cardiovascular events such as cerebral accident and cardiac infarction, and
include diabetes, metabolic syndrome and high blood pressure. To prevent these
diseases, we need to pay attention to lifestyle factors such as exercise, eating and
smoking; therefore, it is important to manage our lifestyle by ourselves. H.
Nakajima et al. [3, 4] have propounded a health management technology constructed
from “Measurement”, “Recognition”, “Estimation” and “Evolution”. To support such
management, several life-log services are provided through the internet or mobile
devices. A life-log service records our daily information; for example, body
composition, blood pressure, calorie consumption, sleep and activities are recorded.
For body weight, a predicted value is useful for planning weight control, and H.
Tanii et al. [5, 6] have proposed a time-series prediction method for body weight.
However, because the variation amount of blood pressure is large and blood pressure
is affected by several factors in a complex way, it is difficult to develop a
prediction model.
High blood pressure causes arteriosclerosis, and arteriosclerosis in turn causes
cardiovascular events such as cerebral accident. They are the biggest killer in

Y.S. Kim et al. (eds.), Advanced Intelligent Systems, 109


Advances in Intelligent Systems and Computing 268,
DOI: 10.1007/978-3-319-05500-8_11, © Springer International Publishing Switzerland 2014

Japan. In general, blood pressure is affected by body fat, age, gender, exercise,
sleep, temperature, mental stress and so on [7-11]. However, the effects of these
factors depend on the individual; for example, even if you walk over 20,000 steps
per day, the decrease in your blood pressure may be smaller than that of a person
who walked 8,000 steps. For this reason, we need to find the major variation factor
for each person. By improving the lifestyle related to the major variation factor,
we can improve our blood pressure effectively.
This paper describes a personalized human model to predict blood pressure change.
We employ four types of time-series prediction models based on past blood pressures,
body weight and number of steps. The body weight and the number of steps are
variation factors of blood pressure: generally, heavy body weight increases blood
pressure, and good exercise decreases it. The four models are combined by fuzzy
logic to predict blood pressure. The fuzzy if-then rules are derived from the mean
absolute prediction error, the correlation coefficient and the variation amount of
the learning data. In our experiment, we collected the blood pressure, body weight
and number of steps of 453 subjects from an internet life-log service. Our proposed
method predicted their blood pressure with a correlation coefficient of 0.895.
Finally, we conclude and discuss the technical results.

2 Blood Pressure Analysis

2.1 Dataset

In our study, biological data are collected by an internet life-log service
(WellnessLINK, OMRON Healthcare, Co., Ltd. [12]). WellnessLINK started in 2010 and
had over 320,000 registered users as of 2013. From the service, we extracted 453
blood pressure datasets that match the following criteria: 1) the record covers over
300 days of blood pressure data, 2) the record contains all of blood pressure, body
composition and exercise, and 3) the registrant did not take medicine. Table 1 shows
the age distribution of the collected data, and Table 2 shows the structure of the
data set. Figure 1 and Figure 2 show examples of blood pressure change. From Figure
1, we can see that the blood pressures decreased over the measurement period; on the
other hand, Figure 2 shows periodical changes. The changes of systolic blood
pressure (SBP) and diastolic blood pressure (DBP) are similar.

Table 1. Subject age and gender information

Age [years]      20-29  30-39  40-49  50-59  60-69  70-79  80-89  Total
Male [person]        6     60    144    101     64     12      1    388
Female [person]      1      7     24     17     13      3      0     65

Table 2. Structure of the data set

Data type            Item
Subject information  Age / birthday; gender; prefecture
Blood pressure       Measurement date and time; systolic blood pressure in morning / evening; diastolic blood pressure in morning / evening; heart rate at the measurement
Body composition     Measurement date and time; body weight in morning / evening; body mass index (BMI) in morning / evening; body fat percentage in morning / evening
Exercise             Measurement date; number of steps in a day

Fig. 1. An example of a decreasing blood pressure change (morning and evening SBP and DBP [mmHg], 2010/8/10 to 2013/1/26)

Fig. 2. An example of a periodically changing blood pressure (morning and evening SBP and DBP [mmHg], 2011/1/7 to 2012/12/7)



2.2 Data Analysis

To estimate the variation factors of blood pressure, we analyze the collected
biological data statistically. We smooth the collected data with a 30-day moving
average and calculate the correlation coefficient between each pair of items, as
shown in Table 3. From this table, we confirmed the high correlation between the
systolic blood pressures in the morning and evening; similarly, the diastolic blood
pressures in the morning and evening had a high correlation. In addition, we can see
a weak correlation between the blood pressures and the body weight. On the other
hand, the number of steps did not correlate with the blood pressure.
Table 4 shows typical subjects with characteristic blood pressure changes. Figures
3, 4, 5, 6 and 7 show their systolic blood pressure, body weight and number of
steps. In subjects #A and #B, we confirmed a positive correlation between blood
pressure and body weight. In addition, the blood pressure of subject #B correlated
with the number of steps; notably, during the 150 days from the start of
measurement, his blood pressure decreased as his steps increased. In subject #C, the
blood pressure correlated only weakly with these factors; however, his blood
pressure decreased sharply at around 2012/2/11, and just before this change the
number of steps increased sharply. We consider that his blood pressure decreased
with the increase in the number of steps. In subject #D, we cannot confirm any
correlation between the blood pressure and the other factors. Moreover, the blood
pressure of subject #E had a negative correlation with the body weight and a
positive correlation with the steps. From these facts, we consider that the
variation factors differ between individuals. Thus, we need to develop a
personalized human body model for blood pressure estimation.
We analyzed the variation amount of the morning systolic blood pressure in a day.
The variation ΔBP(t) is calculated by Equation (1).

ΔBP(t) = BP(t) − BP(t − 1) [mmHg]   (1)

Here, BP(t) is the measured morning systolic blood pressure, and t is the
measurement day. Figure 8 shows the histogram of the variations for all subjects.
The red line in the figure shows the normal distribution N(mean(ΔBP), σΔBP), where
mean(ΔBP) and σΔBP denote the mean value and standard deviation of the variations.
Moreover, we confirmed the histograms of each person. From these results, we assume
that the variation amount ΔBP(t) obeys a normal distribution.

3 Fuzzy Human Model

We develop a fuzzy human model to estimate the variation factors for a subject. The
model is personalized and built from the biological data of the subject. In our
study, the model consists of several time-series prediction models: an
autoregressive (AR) model, a linear prediction model, and body-weight- and
steps-based prediction models. Fuzzy logic combines these prediction models and
estimates the variation factor.

Table 3. Correlation coefficients between items

       SBPM   DBPM   SBPE   DBPE   BWM    BWE     ST     HRM     HRE
SBPM    -     0.792  0.794  0.607  0.336  0.323  -0.016   0.067   0.329
DBPM           -     0.669  0.807  0.417  0.406  -0.044   0.235   0.212
SBPE                  -     0.792  0.396  0.394  -0.015   0.074  -0.255
DBPE                         -     0.392  0.389  -0.042   0.206  -0.244
BWM                                 -     0.999  -0.073   0.084  -0.094
BWE                                        -     -0.083   0.092  -0.103
ST                                                -      -0.126   0.002
HRM                                                        -     -0.025
HRE                                                               -

(SBPM/DBPM: systolic/diastolic BP in morning; SBPE/DBPE: systolic/diastolic BP in evening; BWM/BWE: body weight in morning/evening; ST: number of steps; HRM/HRE: heart rate in morning/evening)

Table 4. Information of typical subjects

ID  Gender  Initial age [year]  Initial body weight [kg]  Initial BMI [kg/m2]  Initial blood pressure [mmHg]
#A  Male    69                  61.6                      22.9                 126
#B  Male    74                  72.7                      24.0                 144
#C  Male    42                  68.8                      24.1                 130
#D  Male    51                  81.0                      27.2                 142
#E  Male    62                  71.2                      25.7                 124

Fig. 3. A model case of blood pressure affected by the body weight and steps in Subject #A: (a) body weight, (b) steps
Fig. 4. A model case of blood pressure affected by the body weight and steps in Subject #B: (a) body weight, (b) steps

Fig. 5. A model case of blood pressure affected by the body weight and steps in Subject #C: (a) body weight, (b) steps

Fig. 6. A model case of blood pressure affected by the body weight and steps in Subject #D: (a) body weight, (b) steps
Fig. 7. A model case of blood pressure affected by the body weight and steps in Subject #E: (a) body weight, (b) steps
Fig. 8. Histogram of the variation amount ΔBP(t) [mmHg] for all subjects

3.1 Pre-processing

In this study, we employ time-series processing of biological data. Because the
biological data are collected by an internet life-log service, the data contain
missing values. To develop the human model, we interpolate the missing values by
the linear interpolation method of Equation (2), illustrated in Figure 9.

Bio(t) = BioRaw(t1) + ( BioRaw(t2) − BioRaw(t1) ) / (t2 − t1) × (t − t1)   (2)

Here, Bio(t) and BioRaw(t) denote the interpolated biological data and the raw
biological data, respectively, and t1 and t2 denote the measurement times
immediately before and immediately after t.
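A minimal sketch of this interpolation, assuming the log is a mapping from measurement day to value; the function name is our own.

```python
def interpolate(samples, t):
    """Linear interpolation of a missing value at day t (Eq. 2).
    `samples` maps measurement day -> value; t1 and t2 are the
    nearest measured days before and after t."""
    if t in samples:
        return samples[t]
    t1 = max(d for d in samples if d < t)
    t2 = min(d for d in samples if d > t)
    slope = (samples[t2] - samples[t1]) / (t2 - t1)
    return samples[t1] + slope * (t - t1)
```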

Fig. 9. Concept image of missing value interpolation

3.2 AR Model
The autoregressive (AR) model is one of the most famous time-series prediction
models [13, 14]. It predicts the blood pressure from past blood pressure data, and
is defined by Equation (3).

PAR(t) = Σ_{i=1}^{p} { a(i) × ( BPMA(t − i) − mean(BP) ) } + mean(BP) [mmHg]   (3)

Here, PAR(t) denotes the predicted value at time t, and a = {a(1), a(2), …, a(p)}
and p denote the AR parameters and the order of the AR model, respectively. BPMA(t)
and mean(BP) denote the 14-day moving average data and the mean value of the
measured blood pressure, respectively. The AR parameters a are obtained by solving
the Yule-Walker equations defined by Equation (4).

| R(0)     R(1)     …  R(p−1) | | a(1) |   | R(1) |
| R(1)     R(0)     …  R(p−2) | | a(2) | = | R(2) |   (4)
|  ⋮        ⋮       ⋱    ⋮    | |  ⋮   |   |  ⋮   |
| R(p−1)   R(p−2)   …  R(0)   | | a(p) |   | R(p) |

Here, R(i) denotes the autocovariance function. The order p of the AR model is
decided on the basis of Akaike's information criterion (AIC) [15, 16], which we
calculate by Equation (5).

AIC = −2 ln(L) + 2k   (5)

Here, L denotes the maximum likelihood and k the number of independently adjusted
parameters within the model; the maximum likelihood L changes with k.

The order p of the AR model is determined by the minimum AIC over [1, t − 1]. In
this paper, the AR model is developed using the statistical analysis software R ver.
2.14.2 © 2012 The R Foundation for Statistical Computing.
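The AR fit of (3)-(4) can be sketched in pure Python: estimate the autocovariances, solve the Yule-Walker system by Gaussian elimination, and predict one step ahead. This sketch skips the 14-day moving average and the AIC-based order search, and the function names are our own.

```python
def autocovariance(x, lag):
    """Biased sample autocovariance R(lag) of a series x."""
    n = len(x)
    m = sum(x) / n
    return sum((x[t] - m) * (x[t + lag] - m) for t in range(n - lag)) / n

def yule_walker(x, p):
    """Solve Eq. (4) for a(1)..a(p) by Gaussian elimination."""
    r = [autocovariance(x, k) for k in range(p + 1)]
    # augmented matrix [Toeplitz(R) | right-hand side]
    A = [[r[abs(i - j)] for j in range(p)] + [r[i + 1]] for i in range(p)]
    for col in range(p):                       # forward elimination
        piv = A[col][col]
        for row in range(col + 1, p):
            f = A[row][col] / piv
            for j in range(col, p + 1):
                A[row][j] -= f * A[col][j]
    a = [0.0] * p
    for i in range(p - 1, -1, -1):             # back substitution
        a[i] = (A[i][p] - sum(A[i][j] * a[j] for j in range(i + 1, p))) / A[i][i]
    return a

def predict_ar(x, a):
    """One-step prediction in the spirit of Eq. (3), using the series mean."""
    m = sum(x) / len(x)
    return sum(ai * (x[-1 - i] - m) for i, ai in enumerate(a)) + m
```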

3.3 Linear Prediction Model

We assume that the blood pressure has a trend, and the linear prediction model
predicts the blood pressure from this trend. Figure 10 shows the concept of the
model: it estimates the red circle from the blue learning-data blood pressures, and
the red straight line is the approximation line obtained by the least-squares method
from the learning data of the last 14 days. The model is defined by Equation (6).

PL(t) = αL × t + βL [mmHg]   (6)

Here, PL(t) denotes the predicted blood pressure value, and αL and βL denote the
slope and intercept obtained by the least-squares method, respectively.

Fig. 10. Concept image of the linear prediction model
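The trend model of (6) is a least-squares line through the last 14 days extrapolated forward. A sketch, with our own function name:

```python
def linear_trend_predict(days, bps, horizon=1):
    """Least-squares line through the last 14 samples (Eq. 6),
    extrapolated `horizon` days past the newest measurement."""
    xs, ys = days[-14:], bps[-14:]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    alpha = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))   # slope alpha_L
    beta = my - alpha * mx                        # intercept beta_L
    return alpha * (xs[-1] + horizon) + beta
```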

3.4 Body Weight and Steps Based Prediction Model


The other factor-based prediction models predict blood pressure from the relationship
between these features and blood pressure. Figure 11 and Figure 12 show examples of the
relationship between the blood pressure and the body weight or the number of steps. To
normalize these factors, we calculate the deviation value of each factor by Equations (7),
(8) and (9).

D_BP(t) = ( BP(t) - \overline{BP} ) / \sigma_{BP} \times 10   [n.u.]   (7)
[Figure: blood pressure vs. body weight; (a) distribution, (b) averaged relation]

Fig. 11. Relationship between the blood pressure and the body weight
118 T. Takeda et al.

[Figure: blood pressure vs. number of steps; (a) distribution, (b) averaged relation]

Fig. 12. Relationship between the blood pressure and the number of steps

D_BW(t) = ( BW(t) - \overline{BW} ) / \sigma_{BW} \times 10   [n.u.]   (8)

D_ST(t) = ( ST(t) - \overline{ST} ) / \sigma_{ST} \times 10   [n.u.]   (9)

Here, the notations D_BP(t), D_BW(t) and D_ST(t) denote the deviation values of the blood pres-
sure data BP(t), the body weight data BW(t) and the number of steps ST(t), respective-
ly. The notations \overline{BP}, \overline{BW} and \overline{ST} denote the mean values of each factor,
and the notations σ_BP, σ_BW and σ_ST denote the standard deviations of each factor.
From the figures, we can see that, on average, the body weight has a positive correlation
with the blood pressure and the number of steps has a negative correlation with the blood
pressure. Moreover, if the deviation values of steps and body weight are zero, then the
deviation of blood pressure is also zero. From these facts, we develop simple linear
prediction models based on these factors as Equations (10) and (11).

P_BW(t) = \alpha_{BW} \times ( \sigma_{BP} / \sigma_{BW} ) \times ( BW(t-1) - \overline{BW} ) + \overline{BP}   [mmHg]   (10)

P_ST(t) = \alpha_{ST} \times ( \sigma_{BP} / \sigma_{ST} ) \times ( ST(t-1) - \overline{ST} ) + \overline{BP}   [mmHg]   (11)

Here, the notations P_BW(t) and P_ST(t) denote the predicted values from the body weight and
the number of steps, respectively, and the notations α_BW and α_ST denote the slopes obtained
by the least-squares method for the body weight and the number of steps, respectively.
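The deviation normalization of Equations (7)-(9) and the factor-based prediction of Equations (10)-(11) can be sketched as follows; the helper names are ours, and the slope `alpha` would come from a least-squares fit on the learning data.

```python
import numpy as np

def deviation(history, value):
    # Eqs. (7)-(9): 10 * (value - mean) / standard deviation, in n.u.
    return 10.0 * (value - np.mean(history)) / np.std(history)

def factor_predict(bp_hist, factor_hist, factor_yesterday, alpha):
    # Eqs. (10)-(11): rescale yesterday's factor deviation into mmHg
    scale = np.std(bp_hist) / np.std(factor_hist)
    return alpha * scale * (factor_yesterday - np.mean(factor_hist)) + np.mean(bp_hist)
```

The same `factor_predict` serves for both body weight and steps, with the sign of `alpha` capturing the positive or negative correlation noted above.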

3.5 Fuzzy Human Model


From these prediction models, we develop a fuzzy human model for blood pressure
estimation. The fuzzy human model is aided by a boosting algorithm and fuzzy logic
[17-19]. The model is a weighted-average model defined as Equation (12).
P(t) = [ \mu_{AR}(t) P_{AR}(t) + \mu_L(t) P_L(t) + \mu_{BW}(t) P_{BW}(t) + \mu_{ST}(t) P_{ST}(t) ] / [ \mu_{AR}(t) + \mu_L(t) + \mu_{BW}(t) + \mu_{ST}(t) ]   [mmHg]   (12)

Here, P(t) is the prediction value of the fuzzy human model. The fuzzy degrees μ_AR(t),
μ_L(t), μ_BW(t) and μ_ST(t) are determined in the learning process. These degrees represent
the reliabilities of the models. In this method, we employ three types of reliability.
The first reliability is based on the mean absolute prediction error of the mod-
el M ∈ {AR, L, BW, ST} for the learning data. The error is calculated by Equa-
tion (13).
e_M(t) = \frac{1}{t-1} \sum_{i=1}^{t-1} | BP(i) - P_M(i) |   [mmHg]   (13)

The second reliability is based on the correlation coefficient r_M(t) between the
measured blood pressures and the predicted blood pressures for the learning data.
The correlation r_M(t) is defined by Equation (14).

r_M(t) = \sum_{i=1}^{t} ( P_M(i) - \overline{P_M} )( BP(i) - \overline{BP} ) / \sqrt{ \sum_{i=1}^{t} ( P_M(i) - \overline{P_M} )^2 \sum_{i=1}^{t} ( BP(i) - \overline{BP} )^2 }   (14)

The third reliability is based on the variation amount ΔBP_M(t) of the model M. The
variation amount is defined as the difference between the predicted blood pressure and
the previously measured blood pressure by Equation (15).

\Delta BP_M(t) = P_M(t) - BP(t-1)   [mmHg]   (15)

We consider that the mean absolute prediction error becomes small when the
prediction model M is reliable (Knowledge 1). When the prediction model M pre-
dicts blood pressure with good accuracy, the correlation coefficient r_M(t) takes a
high value (Knowledge 2). We assume that the variation amount ΔBP(t) obeys the
normal distribution N(\overline{\Delta BP}, σ_ΔBP) (Knowledge 3). From this knowledge, the following
fuzzy if-then rules are derived.
Rule 1: If the mean absolute prediction error e_M(t) of the model M is SMALL,
then the fuzzy degree μ_{M,E}(t) of reliability is high.
Rule 2: If the correlation coefficient r_M(t) of the model M is HIGH, then the fuzzy
degree μ_{M,R}(t) of reliability is high.
Rule 3: If the variation amount ΔBP_M(t) of the model M is CLOSE to the mean val-
ue \overline{\Delta BP} of the learning data, then the fuzzy degree μ_{M,V}(t) of reliability is high.
Here, the fuzzy membership functions SMALL, HIGH and CLOSE are defined by
Figure 13.

[Figure: piecewise-linear membership functions: (a) SMALL over the error e, reaching 0 at E; (b) HIGH over r_M(t), rising from the threshold th_R to 1.0; (c) CLOSE over the variation amount v = ΔBP_M(t), with singleton functions S_{e_M(t)}(e), S_{r_M(t)}(r) and S_{ΔBP_M(t)}(v)]

Fig. 13. Fuzzy membership functions

In the fuzzy membership function SMALL, the notation E denotes an allowable error,
and in our experiment it is set to 50 mmHg. The notation th_R is a threshold parameter,
and it is set to 0.1. The fuzzy degrees are calculated by Equations (16), (17) and (18).

\mu_{M,E}(t) = \min( SMALL, S_{e_M(t)}(e) )   [degree]   (16)

\mu_{M,R}(t) = \min( HIGH, S_{r_M(t)}(r) )   [degree]   (17)
\mu_{M,V}(t) = \min( CLOSE, S_{\Delta BP_M(t)}(v) )   [degree]   (18)

Here, the fuzzy singleton function S_a(b) is defined by Equation (19).

S_a(b) = 1 if b = a, 0 otherwise   [degree]   (19)

From these fuzzy degrees we calculate the fuzzy degree μM (t) of reliability for the
model M by Equation (20).

\mu_M(t) = ( \mu_{M,E}(t) + \mu_{M,R}(t) + \mu_{M,V}(t) ) / 3   [degree]   (20)

By using the calculated fuzzy degrees, we develop the fuzzy human model defined as
Equation (12). In addition, because the fuzzy degrees represent reliability, we can find
the major variation factor by comparing these fuzzy degrees.
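The three reliabilities of Equations (13)-(15) can be turned into fuzzy degrees and combined by Equations (20) and (12) as sketched below. The piecewise-linear shapes mirror Fig. 13 only qualitatively; the CLOSE width and the function names are our assumptions, not values from the paper.

```python
def small(e, E=50.0):
    # SMALL (Fig. 13a): 1 at zero error, falling to 0 at the allowable error E
    return max(0.0, 1.0 - e / E)

def high(r, th_r=0.1):
    # HIGH (Fig. 13b): 0 up to the threshold th_R, rising linearly to 1 at r = 1
    return 0.0 if r <= th_r else (r - th_r) / (1.0 - th_r)

def close(v, mean_v, width=10.0):
    # CLOSE (Fig. 13c): peak at the learning-data mean of ΔBP; width is assumed
    return max(0.0, 1.0 - abs(v - mean_v) / width)

def fuzzy_degree(e, r, v, mean_v):
    # Equation (20): average of the three reliabilities
    return (small(e) + high(r) + close(v, mean_v)) / 3.0

def combine(predictions, degrees):
    # Equation (12): reliability-weighted average of the model outputs
    num = sum(mu * p for mu, p in zip(degrees, predictions))
    return num / sum(degrees)
```

A model whose error is small, whose correlation is high and whose predicted variation is near the learned mean receives a degree close to 1 and dominates the weighted average.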

4 Experimental Results

We collected biological data including blood pressure, body weight and number of
steps from the internet life-log service WellnessLINK. We employed and analyzed
453 subjects as shown in Table 1. In our experiment, our fuzzy human model first
learned from 60 days of biological data, and the method then repeated updating for
each subject. Figure 14 shows the prediction results of each time-series model for one
subject. For this subject, Table 5 shows the correlation coefficient and mean absolute
prediction error between each model's predictions and his 14-day moving-average
blood pressure data. From the results, we consider that the major variation factor of
the subject was the body weight and that the trend of blood pressure was related to
the AR model. Figure 15 shows the prediction results of the fuzzy human model.
Table 6 shows numerical evaluation results for all subjects. From the results, our
proposed fuzzy human model improved the prediction accuracy.

Table 5. Correlation coefficient and mean absolute prediction error of Figure 14

                                 AR model    Linear prediction   Body weight   Steps
Correlation coefficient          0.882       0.536               0.855         0.620
Mean absolute prediction error   1.5 mmHg    4.1 mmHg            1.8 mmHg      2.5 mmHg
[Figure: blood pressure [mmHg] (110-160) vs. measurement date (2011/12/23 - 2012/10/18); series: Raw Data, MA Data, AR model, linear prediction model, BW based model, steps based model]

Fig. 14. An example of prediction results of each time-series model

[Figure: blood pressure [mmHg] (110-160) vs. measurement date (2011/12/23 - 2012/10/18); series: Raw Data, MA Data, fuzzy human body model]

Fig. 15. An example of prediction results of fuzzy human body model

Table 6. Numerical evaluation results

                     For raw data                 For 14-days MA data
Model                Correlation    Error [mmHg]  Correlation    Error [mmHg]
AR model             0.411±0.165    6.5±1.7       0.791±0.092    2.6±0.9
Linear prediction    0.460±0.158    6.9±1.8       0.899±0.032    2.4±0.6
Body weight          0.227±0.164    7.1±2.0       0.383±0.185    4.8±1.8
Steps                0.146±0.162    7.4±2.1       0.269±0.187    5.1±1.9
Fuzzy human model    0.468±0.154    6.4±1.6       0.895±0.039    2.6±0.9

5 Discussion

As shown in Table 6, the prediction results of the body weight and steps based predic-
tion models were worse than those of the AR model and the linear prediction model. The
AR model and the linear prediction model are based on the past blood pressures of the
subject. On the other hand, the body weight and steps based prediction models are
based on other factors. From this fact, we consider that the past blood pressure always
affects the next blood pressure, whereas the effects of the body weight and steps depend
on the person. In this paper, we employed four prediction models. However, blood
pressure is influenced not only by these factors; for example, the temperature, sleep,
age and eating also affect blood pressure. To improve our model and to find the varia-
tion factors, we consider it important to add such information.

6 Conclusion

In this study, we proposed a fuzzy human model to predict and analyze blood pres-
sure. The model consists of the AR model, the linear prediction model, the body
weight based prediction model and the number of steps based prediction model. These
models are boosted by fuzzy logic. The fuzzy degrees were calculated based on the mean
absolute prediction error, the correlation coefficient and the variation amount of each
model. In our experiment, we collected biological data of 453 subjects, including blood
pressure, body weight and number of steps, from the internet life-log service
WellnessLINK. Our proposed model predicted their morning systolic blood pressure.
The obtained correlation coefficient was 0.895±0.039 (mean ± standard deviation).
Our proposed method achieved the development of a personalized human model. From
the prediction results, we can find the variation factors of a person's blood pressure.
In the future, we will add other variation factors to our model, i.e. temperature, sleep
and body composition information.

Acknowledgements. This research was supported in part by Japan Society for the
Promotion of Science with Grant-in-Aid for Scientific Research (A) (KAKENHI
25240038).

References
1. Ministry of Health, Labour and Welfare, http://www.mhlw.go.jp/index.shtml
2. Japan Preventive Association of Life-style related Disease,
http://www.seikatsusyukanbyo.com/
3. Nakajima, H., Hasegawa, Y., Tasaki, H., Iwami, T., Tsuchiya, N.: Health Management
Technology as a General Solution Framework. SICE Journal of Control, Measurement,
and System Integration 1, 257–264 (2008)
4. Nakajima, H., Shiga, T., Hata, Y.: System Health Care - Health Management Technology.
In: Proc. of IEEE 43rd Int. Symp. on Multiple-Valued Logic, pp. 6–11 (2013)
5. Tanii, H., Nakajima, H., Tsuchiya, N., Kuramoto, K., Kobashi, S., Hata, Y.: A Fuzzy
Time-Series Prediction Model with Multi-biological Data for Health Management. In:
Proc. of the 6th International Conference on Soft Computing and Intelligent Systems and
13th International Symposium on Advanced Intelligent Systems, pp. 1265–1268 (2012)
6. Tanii, H., Nakajima, H., Tsuchiya, N., Kuramoto, K., Kobashi, S., Hata, Y.: A Fuzzy-AR
Model to Predict Human Body Weights. In: Proc. of 2012 IEEE World Congress on Com-
putational Intelligence, pp. 2027–2032 (2012)
7. Tamura, T., Mizukawa, I., Sekine, M., Kimura, Y.: Monitoring and Evaluation of Blood
Pressure Changes with a Home Healthcare System. IEEE Trans. on Information Technolo-
gy in Biomedical 15(4), 602–607 (2011)
8. Moreau, K.L., Degarmo, R., Langley, J., McMahon, C., Howley, E.T., Bassett, D.R.,
Thompson, D.L.: Increasing Daily Walking Lowers Blood Pressure in Postmenopausal
Women. Med. Sci. Sports Exerc. 33(11), 1825–1831 (2001)
9. Iwane, M., Arita, M., Tomimoto, S., Satani, O., Matsumoto, M., Miyashita, K., Nishio, I.:
Walking 10000 Steps/Day or More Reduces Blood Pressure and Sympathetic Nerve Activi-
ty in Mild Essential Hypertension. Hypertens. Res. 23(6), 573–580 (2000)
10. Brennan, P.J., Greenberg, G., Miall, W.E., Thompson, S.G.: Seasonal Variation in Arterial
Blood Pressure. British Medical Journal 285(6346), 919–923 (1982)
11. Alperovitch, A., Lacombe, L.M., Hanon, O., Dartigues, J.F., Ritchie, K., Ducimetiera, P.,
Tzourio, C.: Relationship Between Blood Pressure and Outdoor Temperature in a Large
Sample of Elderly Individuals. Archives of Internal Medicine 169(1), 75–80 (2009)
12. Wellness LINK, Omron Healthcare,
http://www.wellnesslink.jp/p/index.html
13. Wang, J., Zhang, T.: Degradation Prediction Method by Use of Autoregressive Algorithm.
In: Proc. of IEEE Int. Conf. on Industrial Technology 2008, pp. 1–6 (2008)
14. Gersch, W., Brotherton, T.: AR Model Prediction of Time Series with Trends and
Seasonalities: A Contrast with Box-Jenkins Modeling. In: Decision and Control including
the Symposium on Adaptive Processes, vol. 19, p. 988 (1980)
15. Akaike, H.: A New Look at the Statistical Model Identification. IEEE Transactions on
Automatic Control 19(6), 716–723 (1974)
16. Shibata, R.: Selection of the order of an autoregressive model by Akaike’s information cri-
terion. Biometrika 63, 117–126 (1975)
17. Zadeh, L.A.: Fuzzy sets. Information and Control 8(3), 338–353 (1965)
18. Zadeh, L.A.: The role of fuzzy logic in the management of uncertainty in expert systems.
Fuzzy Sets and Systems 11(1-3), 199–227 (1983)
19. Kruse, R., Gebhardt, J., Klawonn, F.: Foundations of Fuzzy Systems, 1st edn. John Wiley
& Sons Ltd. (1994)
A Fuzzy Ultrasonic Imaging Method
for Healthy Seminiferous Tubules

Koki Tsukuda1, Tomomoto Ishikawa2, Seturo Imawaki2, and Yutaka Hata1,3


1
Graduate School of Engineering, University of Hyogo, Hyogo, Japan
2
Ishikawa Hospital, Hyogo, Japan
3
WPI Immunology Frontier Research Center, Osaka University, Osaka, Japan

Abstract. This paper proposes a fuzzy ultrasonic imaging method for healthy
seminiferous tubules. In our study, we employ thick or thin nylon lines as
healthy or unhealthy seminiferous tubules. We make cross-section images that
consist of multiplying fuzzy degrees depending on amplitude and frequency of
line echoes. The images are healthy or unhealthy seminiferous tubules images
(HSI or USI) that indicate distribution of healthy or unhealthy seminiferous
tubules. For a performance test, we make a measurement object consisting of
the nylon lines. For a phantom test, we make a phantom of a testicle. The
phantom consists of a water filled rubber tube including the nylon lines. We
scan and acquire ultrasonic reflection wave data of them. Next, we derive fuzzy
IF-THEN rules, and make HSI and USI. In performance test, the images
indicated distribution of the lines. In phantom test, HSI successfully extracted
thick line echoes.

Keywords: ultrasonic, seminiferous tubule, fuzzy, medical imaging.

1 Introduction

Infertile married couples, who do not have a baby within two years after marriage
despite not using contraception, are increasing. According to a survey by the WHO,
48% of infertile couples have problems on the male side, and 15 to 20% of infertile
males have azoospermia, the symptom defined as a complete absence of sperm in
ejaculated semen. It is classified into two problems: one is a production problem
called non-obstructive azoospermia (NOA), and the other is a delivery problem
called obstructive azoospermia (OA). OA patients can be cured by surgery that
removes a blockage. On the other hand, for NOA patients there was no ultimate cure.
However, the development of IntraCytoplasmic Sperm Injection (ICSI) opened a new
era in the field of assisted reproduction in the 90's. A sperm in seminiferous tubules
extracted from NOA patients became able to be used for micro fertilization [1], [2].
To avoid destruction of testicular function after surgery without compromising
recovery rate of sperms, the ideal sperm extraction from testicles of NOA patients
should be minimally invasive. Schlegel et al. developed the technique with the
assistance by an operating microscope; it is called Micro-TESE (microdissection

Y.S. Kim et al. (eds.), Advanced Intelligent Systems, 125


Advances in Intelligent Systems and Computing 268,
DOI: 10.1007/978-3-319-05500-8_12, © Springer International Publishing Switzerland 2014
126 K. Tsukuda et al.

testicular sperm extraction) [3]. The possibility that thick seminiferous tubules (250 to
300 μm in diameter) include sperm is higher than that of thin seminiferous tubules
(150 μm in diameter). Some azoospermia patients have both types of seminiferous
tubules; however, there are patients who have only thin seminiferous tubules, and
sperm cannot be collected from the testicles of these patients [4], [5]. Before the
surgery, there is no way to judge whether they have sperm. Additionally, surgery by
micro-TESE is demanding and costly, and it imposes a physical and economic burden
on the patient. Therefore, we need a system that is able to detect the thick seminiferous
tubules in the testicle noninvasively before the surgery [6], [7].
There are devices that can measure the human body noninvasively; an ultrasonic
device is one of them. Ultrasonic techniques have been proposed to measure the inside
of the human body noninvasively [8], [9]. These measurements usually employ the
pulse-echo method [10]. The method calculates an object's depth by detecting the
arrival time of a received wave, and we can calculate the object's thickness from the
received-wave time difference between the surface and the bottom of the object.
Therefore, the spatial resolution of the method is determined by the ultrasonic
wavelength and the number of waves. Generally, we would have to select a 60 MHz
ultrasonic probe to get enough spatial resolution for measuring thin seminiferous
tubules by this method. However, a higher frequency wave has higher attenuation,
i.e. lower osmosis. Considering osmosis into the human body, we should select an
ultrasonic probe whose transmission frequency is lower than 3.5 MHz. From the
above, we need a method that can measure very small objects while keeping the
osmosis ability.
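The pulse-echo depth and thickness computation described above amounts to a one-line formula; the 1540 m/s soft-tissue sound speed below is a typical textbook value, assumed here for illustration rather than taken from the paper.

```python
def pulse_echo_depth(echo_time_s, speed_m_s=1540.0):
    # The pulse travels to the reflector and back, so depth = c * t / 2
    return speed_m_s * echo_time_s / 2.0

def thickness(t_surface_s, t_bottom_s, speed_m_s=1540.0):
    # Thickness from the time difference of surface and bottom echoes
    return speed_m_s * (t_bottom_s - t_surface_s) / 2.0
```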
In previous studies, our group proposed a characteristic relation between the
ultrasonic reflection wave frequency and line thickness, indicating that the frequency
of a thick line is lower than that of a thin line. By using this characteristic, our group
measured the thickness of lines thinner than the spatial resolution [11]. Moreover, our
group proposed ultrasonic imaging for seminiferous tubules [12]. In that report, the
images did not have the original spatial resolution because short-time Fourier
transform (STFT) was applied and the images were made directly from the STFT
data. In contrast, this paper proposes an ultrasonic imaging method that keeps the
original spatial resolution for the seminiferous tubules of a testicle, assuming real
inspections. By using the characteristic, we consider knowledge of tubules and
calculate fuzzy degrees of tubules. While keeping the original spatial resolution, we
make a cross-section image that indicates the distribution of tubules. In our
experiment, we employ two kinds of nylon fishing lines with different diameters as
samples of seminiferous tubules. Firstly, we acquire ultrasonic reflection wave data of
the lines by a linear scan. Secondly, we apply STFT to the acquired data to calculate
peak frequencies. Thirdly, we calculate fuzzy degrees from the peak frequencies and
the amplitude of the acquired ultrasonic data. Finally, we make fuzzy images that keep
the original spatial resolution by multiplying the fuzzy degrees. The images are the
healthy seminiferous tubules image (HSI) and the unhealthy seminiferous tubules
image (USI), which show the distribution of healthy and unhealthy tubules, respectively.
A Fuzzy Ultrasonic Imaging Method for Healthy Seminiferous Tubules 127

In a performance test, the HSI and USI indicated the line distribution. In a phantom
test, the HSI and USI successfully indicated the echoes of the lines. These results
show a high possibility of being able to know the existence of healthy seminiferous
tubules in a testicle in real inspections.

2 Preliminaries

2.1 A Measurement Object for Performance Test


We employ two kinds of nylon fishing lines with different diameters as seminiferous
tubules; the lines are shown in Fig. 1. The diameter of the thin line (Line A) is
90 μm and that of the thick line (Line B) is 285 μm. We employ Line A as the
sample of the unhealthy seminiferous tubule, whose diameter is smaller than 150 μm,
and Line B as the sample of the healthy seminiferous tubule, whose diameter ranges
from 250 to 300 μm. In our experiment, we make the measurement object for the
performance test of our imaging method. The object is shown in Fig. 2. It consists of
the plastic box and nylon fishing lines as shown in Fig. 2(a). The illustration of the
line distribution is shown in Fig. 2(b). Using this object, we operate the performance
test of our imaging method.

[Figure: photograph of Line A (90 μm) and Line B (285 μm)]

Fig. 1. Two kinds of nylon fishing lines with different diameters

[Figure: (a) top view of the plastic box with the scanning line; (b) side-view line distribution of Line A (90 μm) and Line B (285 μm), with 5 mm, 10 mm and 20 mm dimensions]

Fig. 2. The measurement object for performance test



2.2 A Phantom of a Testicle

We employ a phantom that consists of nylon fishing lines and a rubber tube as a
sample of a testicle. The phantom is shown in Fig. 3. It is made of twenty four thin
lines of 30cm (30cm × 24), three short thick lines and the rubber tube with 15cc of
water. We make a lump of thin lines like wool by rubbing hands and insert thick lines
in the lump. Finally we put the lump in the rubber tube with water. In our experiment,
using the ultrasonic device and scanner, we scan linearly and acquire ultrasonic
reflection wave data of the phantom. Finally we make a cross-section image (a
B-mode image) of the phantom by our method.

[Figure: (a) top view of the rubber tube with the scanning line; (b) components: Line A, Line B and the rubber tube]

Fig. 3. The phantom of a testicle

2.3 An Ultrasonic Data Acquisition System


In our experiment, we use the ultrasonic single probe shown in Fig. 4. The center
frequency of the probe is 5.0 MHz. Our ultrasonic data acquisition system is shown in
Fig. 5. The sampling interval of data acquisition is 4 ns. The ultrasonic wave data are
provided to a personal computer through an A/D converter (Pico Technology,
PicoScope 4227) as 8-bit intensity data. In our experiment, we acquire ultrasonic
reflection wave data from an object by using the ultrasonic single probe. In this
system, the probe is fixed on a scanner for linear scanning. The scan interval is
1.0 mm. We make a B-mode image of an object by collecting the scanning data.

Fig. 4. The ultrasonic single probe


[Figure: system components: PC, scanner, ultrasonic single probe, A/D converter, object, water tank, pulsar receiver]

Fig. 5. The ultrasonic data acquisition system

3 Principle

In a previous study, our group proposed the following relationship between the
frequency of an ultrasonic reflection wave and the line thickness [11]. The frequency
of a vibrating line is determined by (1).

f = \frac{1}{2l} \sqrt{\frac{T}{\sigma}}   (1)
Here, the notation f denotes a frequency, l denotes the length of a line, T denotes the
tension of a line, and σ denotes the linear density, i.e. the mass per unit length. We
assume that l and T are constant. Thus, the frequency is inversely proportional to the
square root of the linear density as shown in (2).

f \propto \frac{1}{\sqrt{\sigma}}   (2)
The density σ is represented with the diameter of a line as shown in (3).

\sigma = \frac{\pi}{4} \varphi^2 M   (3)
Here, the notation M denotes the density of a line, and φ denotes the diameter of a
line. M means the mass per unit volume, and it can be assumed constant because we
employ nylon fishing lines in our experiment. Thus, relationship (4) can be derived
from (3).

\sigma \propto \varphi^2   (4)

Therefore, the frequency of a vibrating line is inversely proportional to the diameter
of the line.

f \propto \frac{1}{\varphi}   (5)
From this relationship, we calculate the frequency of Line A and Line B.
130 K. Tsukuda et al.

Fig. 6 shows the frequency spectra of nylon fishing lines with different diameters. We
calculated the peak frequency of all nylon fishing lines. The characteristic between f
and 1/φ is shown in Fig. 7, where the relationship of (5) holds. The linear approximate
equation is given by (6).

f = 415.4 \times \frac{1}{\varphi} + 0.2976   (6)

[Figure: PSD [dB] (0-25) vs. frequency [MHz] (0.5-5.5) for line diameters 90, 117, 155, 205, 235, 260 and 285 μm]

Fig. 6. Frequency spectra of nylon fishing lines with different diameter

[Figure: f [MHz] vs. 1/φ [μm^-1] (0.000-0.012) with the fitted line y = 415.43x + 0.2976]

Fig. 7. The characteristic of f and 1/φ
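Equation (6) (equivalently, the fitted line in Fig. 7) maps a line diameter to its expected peak frequency and can be inverted to estimate a diameter from a measured peak. A small sketch with hypothetical function names:

```python
def peak_frequency_mhz(diameter_um):
    # Eq. (6): f = 415.4 * (1/phi) + 0.2976, phi in micrometres, f in MHz
    return 415.4 / diameter_um + 0.2976

def diameter_um(peak_freq_mhz):
    # Inverse of Eq. (6): recover the line diameter from a peak frequency
    return 415.4 / (peak_freq_mhz - 0.2976)
```

For example, thick Line B (285 μm) maps to roughly 1.76 MHz, which lies inside the healthy band [f1, f2] used in the experiments below.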

4 Proposed Method

As Fig. 8 shows, our method consists of four steps. Firstly, we apply STFT to the
acquired ultrasonic data. Secondly, we calculate peak frequencies from each STFT
datum. Thirdly, we calculate fuzzy degrees from the peak frequencies and the
amplitude of the acquired ultrasonic data. Finally, we make fuzzy images by
multiplying the fuzzy degrees. The images are the HSI and the USI, which indicate
the distribution of healthy and unhealthy tubules, respectively.
[Flowchart: START → applying short-time Fourier transform (input: ultrasonic data) → calculation of peak frequencies → calculation of fuzzy degrees → making fuzzy images → END]

Fig. 8. The procedure of our method

4.1 Applying STFT


By the system shown in Fig. 5, we scan and acquire ultrasonic reflection wave data.
To get frequency data along the time axis, we apply STFT to the acquired ultrasonic
data by (7)-(9).

STFT_s(\tau, k) = \sum_{m=0}^{L-1} x_s(t + m) \, win(m) \, e^{-j 2\pi k m / L}   (7)

t = \frac{L}{2} + \tau L_O   (\tau = 0, 1, \ldots, n)   (8)

f = \frac{F_{sp}}{L} k   (k = 0, 1, \ldots, L-1)   (9)

Here, the notation L denotes a window length that is a power of two, s denotes the
position on the scanning line, t denotes time, k denotes an index along the frequency
axis, τ denotes an index along the time axis, STFT_s(τ, k) denotes the STFT data,
x_s(t) denotes the acquired data, win(m) denotes the Hanning window that has a peak
at L/2, j denotes the imaginary unit, f denotes frequency, L_O denotes the overlap
length of the STFT, n denotes a nonnegative integer, and F_sp denotes the sampling
frequency.
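Equations (7)-(9) can be sketched with NumPy's FFT; the Hanning window, L = 512 and L_O = 26 follow the experimental settings later in the paper, while the helper names are ours.

```python
import numpy as np

def stft(x, L=512, LO=26):
    # Eq. (7): Hanning-windowed DFT of length L, hopped by the overlap length LO
    win = np.hanning(L)
    frames = []
    for start in range(0, len(x) - L + 1, LO):
        frames.append(np.abs(np.fft.fft(x[start:start + L] * win)))
    return np.array(frames)

def peak_frequency(spectrum, fsp, L):
    # Eq. (9): frequency axis f = k * Fsp / L; strongest bin in the lower half
    k = int(np.argmax(spectrum[:L // 2]))
    return k * fsp / L
```

With the 4 ns sampling interval of the acquisition system, F_sp = 250 MHz, so the frequency resolution F_sp / L is about 0.49 MHz per bin.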

4.2 The Fuzzy Ultrasonic Imaging for Seminiferous Tubules


In this section, we describe the method of making the fuzzy images. Firstly, we
calculate peak frequencies from each STFT datum. The concept of this calculation is
shown in Fig. 9. Peak frequencies are calculated from one STFT datum; to get
significant peaks of the PSD, we set a threshold Th_p. We employ these significant
peaks for the calculation of fuzzy degrees.
Next, we calculate fuzzy degrees from the peak frequencies and the amplitude of
the acquired ultrasonic data. To calculate the fuzzy degrees, we consider three pieces
of knowledge.
132 K. Tsukuda et al.

Knowledge 1: The tubule echo has slightly higher amplitude than other echoes.
Knowledge 2: The frequency of a tubule echo differs depending on its diameter.
Knowledge 3: The frequency of noise is much higher or lower than that of
significant echoes.
From this knowledge, three fuzzy IF-THEN rules are derived.
Rule 1: IF the amplitude of the echo is high, THEN μa is high.
Rule 2: IF the peak frequency of the echo is close to fh, THEN μh is high.
Rule 3: IF the peak frequency of the echo is close to fuh, THEN μuh is high.
Here, the notation μa denotes the fuzzy degree of tubules, fh denotes the frequency of
healthy tubules, μh denotes the fuzzy degree of healthy tubules, fuh denotes the
frequency of unhealthy tubules, and μuh denotes the fuzzy degree of unhealthy
tubules. From these rules, three fuzzy membership functions are defined as shown in
Fig. 10. In Fig. 10(a), the notation Th denotes the threshold where the fuzzy degree is
max, and Sa(Amplitude) denotes the fuzzy singleton function given by (10); therefore
μa is calculated by (11). In Fig. 10(b), the notations f1 and f2 denote the frequencies of
healthy (thick) seminiferous tubules (300 to 250 μm in diameter), fh denotes the
frequency where the fuzzy degree is max, and Spf(PF) denotes a fuzzy singleton
function like (10); therefore μh is calculated by (12). In Fig. 10(c), the notations f3 and
f4 denote the frequencies of unhealthy (thin) seminiferous tubules (150 to 90 μm in
diameter), and fuh denotes the frequency where the fuzzy degree is max; therefore μuh
is calculated by (13). In the calculation of μh and μuh, we use all peak frequencies and
employ the highest value of each fuzzy degree as μh or μuh. From these degrees, we
make the HSI and USI by (14) and (15).
S_a(Amplitude) = 1 if Amplitude = a, 0 otherwise   (10)

\mu_a = \min( S_a(Amplitude), HIGH )   (11)

\mu_h = \min( S_{pf}(PF), CLOSE_1 )   (12)

\mu_{uh} = \min( S_{pf}(PF), CLOSE_2 )   (13)

HSI(s, t) = \mu_a(s, t) \times \mu_h(s, t)   (14)

USI(s, t) = \mu_a(s, t) \times \mu_{uh}(s, t)   (15)
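Per pixel, Equations (11)-(15) reduce to evaluating memberships at crisp values (taking the min with a fuzzy singleton picks out the membership at that value) and multiplying. A sketch under assumptions: the trapezoid-like CLOSE shape and the linearly saturating HIGH are our reading of Fig. 10, and the function names are hypothetical.

```python
def close_band(f, lo, hi):
    # CLOSE1/CLOSE2: 1 inside [lo, hi], falling linearly over a band of (hi - lo)
    if lo <= f <= hi:
        return 1.0
    d = (lo - f) if f < lo else (f - hi)
    return max(0.0, 1.0 - d / (hi - lo))

def make_pixel(amplitude, peak_freqs, th, f1, f2, f3, f4):
    # Equations (11)-(15) for one pixel (s, t): returns (HSI, USI) values
    mu_a = min(1.0, amplitude / th)                           # HIGH, saturating at Th
    mu_h = max(close_band(pf, f1, f2) for pf in peak_freqs)   # healthy (thick) band
    mu_uh = max(close_band(pf, f3, f4) for pf in peak_freqs)  # unhealthy (thin) band
    return mu_a * mu_h, mu_a * mu_uh
```

With the experimental parameters used later (f1 = 1.68, f2 = 1.96, f3 = 3.07, f4 = 8.61, Th = 0.3), a strong echo peaking near 1.8 MHz lights up the HSI more than the USI.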

[Figure: acquired data along scanning positions s = 1, 2, …, N; STFT data as PSD vs. k with threshold Th_p and peak frequencies PF1, PF2]

Fig. 9. The concept figure of calculation of peak frequencies
[Figure: membership functions: (a) HIGH over Amplitude [V] with threshold Th, level α and singleton Sa(Amplitude); (b) CLOSE1 over PF [MHz] on [f1, f2] with peak at fh and singleton Spf(PF); (c) CLOSE2 over PF [MHz] on [f3, f4] with peak at fuh and level β]

Fig. 10. Fuzzy membership functions for fuzzy imaging

5 Experimental Results

5.1 Performance Test


We acquired reflection wave data on the scanning line of the measurement object
shown in Fig. 2. The B-mode image of the acquired data is shown in Fig. 11, in which
the echoes of all lines are indicated. Next, we applied STFT to the acquired data.
The window length L was 512, and the overlap length LO was 26. To get significant
peaks and calculate the three fuzzy degrees, we employed Thp=0.6, Th=0.3, α=0.5, β=0.5,
f1=1.68, f2=1.96, fh=f1+(f2-f1)/2, f3=3.07, f4=8.61 and fuh=f3+(f4-f3)/2 as experimental
parameters so that line echoes are clearly extracted. Moreover, the parameters f1, f2, f3
and f4 were calculated by (6); φ was 300, 250, 150 and 50, respectively.
The fuzzy degrees μa, μh and μuh of the measurement object are shown in Fig. 12. In
Fig. 12(a), weak line echoes of the lower stage were extracted clearly compared with
the B-mode image. In Fig. 12(b) and Fig. 12(c), each fuzzy degree seemed to show the
distribution of each nylon line. We made the HSI and USI of the measurement object by
(14) and (15). These images are shown in Fig. 13. In Fig. 13(a), the echoes of Line A
were extracted. In Fig. 13(b), the echoes of Line B were slightly extracted.
Next, we calculated the depth of the line echoes. The results are shown in Table 1. The
calculated depth is the average depth of the line echoes at each stage. In Table 1,
the errors of depth in the lower stage were largest for both lines. We can say the reasons
are the wavelength limit and attenuation. In our experiment, we employed the 5.0 MHz
probe considering osmosis into the human body. Because of this, the echoes were acquired
without enough spatial resolution. Furthermore, the echoes of the lower stage were
attenuated because they were acquired through the lines of the upper stage. This
attenuation is also recognizable as the echo intensity difference of each stage in Fig. 11;
echo intensities become gradually weaker from the upper stage to the lower stage.

[Figure: B-mode image with amplitude scale]

Fig. 11. B-mode image of the measurement object

[Figure: degree maps (a) μa, (b) μh, (c) μuh]

Fig. 12. Fuzzy degrees of the measurement object

[Figure: degree images (a) HSI, (b) USI]

Fig. 13. Fuzzy images of the measurement object

Table 1. Calculation results for depths of Line A and Line B

Line A Line B
Stage of the lines Upper Middle Lower Upper Middle Lower
Calculated depth [mm] 26.5 39.6 52.0 24.1 38.0 53.0
True depth [mm] 27.0 37.0 47.0 27.0 37.0 47.0
Error of depth [mm] 0.5 2.6 5.0 2.9 1.0 6.0

5.2 Phantom Test


We acquired reflection wave data on the scanning line of the phantom shown in Fig. 3.
The B-mode image of the acquired data is shown in Fig. 14, in which the echoes
of all lines are slightly indicated. In the same way, we calculated the three fuzzy degrees.
The degrees μa, μh and μuh of the phantom are shown in Fig. 15. In Fig. 15(a), weak
echoes of the lines were extracted clearly compared with the B-mode image. In Fig. 15(b)
and Fig. 15(c), lower degree areas of μuh seemed to exist at higher degree areas of μh.
We made the HSI and USI of the phantom by (14) and (15). These images are shown in
Fig. 16. In Fig. 16(a), three echoes of Line B were clearly extracted with the distribution
shown in Fig. 3; the echoes are marked by white arrows. In Fig. 16(b), the echoes
except those of Line B were extracted.

Fig. 14. B-mode image of the phantom

Fig. 15. Fuzzy degrees of the phantom: (a) μa, (b) μh, (c) μuh

Fig. 16. Fuzzy images of the phantom: (a) HSI, (b) USI

6 Conclusion

In this paper, we have proposed a fuzzy ultrasonic imaging method for healthy
seminiferous tubules. We employed two kinds of nylon fishing line with different
diameters to represent healthy and unhealthy seminiferous tubules. In the performance
test, our imaging method correctly indicated the line distribution. In the phantom test,
our imaging method successfully extracted the echoes of the lines while keeping spatial
resolution. These results suggest that our imaging method has high potential to reveal
the existence of healthy seminiferous tubules in real inspections.
In the future, we will improve the accuracy of our method and perform experiments
with a human testicle.

Acknowledgement. This work was supported in part by Japan Society for the
Promotion of Science with Grant-in-Aid for Challenging Exploratory Research
(KAKENHI 25670689).

References
[1] Hochschild, F.Z., et al.: The International Committee for Monitoring Assisted
Reproductive Technology (ICMART) and the World Health Organization (WHO) revised
glossary on ART terminology. Human Reproduction 24(11), 2683–2687 (2009)
[2] Palermo, G., Joris, H., Devroey, P., Van Steirteghem, A.C.: Pregnancies after
intracytoplasmic injection of single spermatozoon into an oocyte. Lancet 340, 17–18
(1992)
[3] Schlegel, P.N., Li, P.S.: Microdissection TESE: spermatozoa retrieval in non-obstructive
azoospermia. Hum. Reprod. Update 4, 439 (1998)
[4] Ishikawa, T., Nose, R., Yamaguchi, K., Chiba, K., Fujisawa, M.: Learning curves of
microdissection testicular sperm extraction for non-obstructive azoospermia. Fertil.
Steril., pp. 1008–1011 (in press, August 2010)
[5] Ishikawa, T., Fujisawa, M.: Microdissection testicular sperm extraction for non-
obstructive azoospermia: The assessment of serum hormone levels before and after
procedure. Japanese Journal of Reproductive Endocrinology 14, 15–20 (2009)
[6] Ramkumar, A., Lal, A., Paduch, D.A., Schlegel, P.N.: An Ultrasonically Actuated
Silicon-Microprobe-Based Testicular Tubule Assay. IEEE Transactions on Biomedical
Engineering 56, 2666–2674 (2009)
[7] Giffin, J.L., Franks, S.E., Rodariguez-Sosa, J.R., Hahnel, A., Bartlewski, P.M.: A Study
of Morphological and Haemodynamic Determinants of Testicular Echotexture
Characteristics in the Ram. Exp. Biol. Med., 794–801 (July 2009)
[8] Nakamura, M., Kitamura, Y.T., Yanagida, T., Kobashi, S., Kuramoto, K., Hata, Y.: Free
placement trans-skull doppler system with 1.0MHz array ultrasonic probe. In: Proc. of
2010 IEEE Int. Conf. on Systems, Man and Cybernetics, pp. 1370–1374 (2010)
[9] Yagi, N., Oshiro, Y., Ishikawa, O., Hata, Y.: Trans-skull brain imaging by image
registration of 0.5 and 1.0 MHz waves. In: Proc. of 2011 IEEE Int. Conf. on Systems,
Man, and Cybernetics, pp. 706–710 (2011)
[10] Kräutkramer, J., Kräutkramer, H.: Ultrasonic Testing of Materials, 4th edn. (1990)
[11] Takashima, Y., Ishikawa, T., Kobashi, S., Kuramoto, K., Hata, Y.: Ultrasonic evaluation
of seminiferous tubules by frequency map. In: Proc. of 2012 Fifth Int. Conf. on Emerging
Trends in Engineering and Technology, pp. 7–12 (2012)
[12] Tsukuda, K., Ishikawa, T., Hata, Y.: An Ultrasonic Imaging for Seminiferous Tubules
beyond The Wavelength Limit. In: Proc. of 2013 IEEE Int. Conf. on Systems, Man and
Cybernetics (2013) (in press)
Ultrasonic Mobile Smart Technology for Healthcare

Naomi Yagi1, Tomomoto Ishikawa2, Setsurou Imawaki2, and Yutaka Hata1,3


1 Graduate School of Engineering, University of Hyogo, Hyogo, Japan
2 Ishikawa Hospital, Hyogo, Japan
3 WPI Immunology Frontier Research Center, Osaka University, Osaka, Japan

Abstract. This paper describes mobile health care management in a smart
medical system. The transformation of electricity grids into smart grids has
been widely regarded as a key to sustainable growth around the globe.
The trend toward smart grids comes at a time in which information and
communication technologies have revolutionized personal communications and
turned wireless communications into a commodity. Thus, it is no coincidence
that communications technology will play an essential role in the
implementation of smart grids. This study designs a mobile medical system to
review data prior to patient access. Improved communication can also ease the
process for patients, clinicians, and caregivers. As one implementation of a
smart medical system, an ultrasonic diagnosis and mobile communication
system is proposed.

Keywords: smart medical system, communication approach, mobile health
care, emergency medicine, smart grid.

1 Introduction

The electronic health record (EHR), operated by clinicians or health care providers, is
the more widely used record. It contrasts with the personal health record (PHR), a
health record in which the patient maintains health data and information for his or her
own care [1]. The intention of a PHR is to provide a summary of an individual's medical
history that is accessible online. The health data on a PHR include patient-reported
outcome data and data collected passively from devices connected to a smartphone,
such as wireless electronic weighing scales. Patients may enter data into the PHR
directly, either by typing into fields or by uploading/transmitting data from a file or
another website. In recent years, several formal definitions of the term have been
proposed by various organizations [2].
Mobile health care management grants patients access to health information, best
medical practices, and health knowledge. Moreover, it helps clinicians make better
treatment decisions by providing more continuous data, and helps them identify health
threats and improvement opportunities based on drug information, current medical
practices, and care plans. The medical support system also makes it easier for
clinicians to care for their patients by facilitating continuous communication.
Eliminating communication barriers and allowing documentation to flow between
patients and clinicians can save time consumed by face-to-face meetings and
telephone communication.

Y.S. Kim et al. (eds.), Advanced Intelligent Systems, 137
Advances in Intelligent Systems and Computing 268,
DOI: 10.1007/978-3-319-05500-8_13, © Springer International Publishing Switzerland 2014

Improved communication can also ease the process for
patients, clinicians, and caregivers to ask questions, to set up appointments, and to
report problems. Additionally, the smart medical system can quickly provide critical
information for proper diagnosis or treatment in the case of an emergency. One
advantage of the software development and implementation is the ease with which
customizations can be implemented and refined. For example, the features of the care
notebook component were initially developed for health-specific requirements. By
expanding the types of data this tool supports for specific needs and care plans,
overlapping data points can help spread best practices across implementations.
The final goal of this research is to construct a smart medical system using an
ultrasonic device and mobile communication technology. Medical diagnosis systems
using ultrasonic devices are widely used in medicine [3]-[4]. Recently, a simple and
unrestrained system without a large mechanical scanner has been strongly demanded.
We previously proposed a transcranial brain imaging system using an ultrasonic array
probe [5]-[11]. This study extends those works and proposes a mobile health care
system with a smartphone application. Moreover, this system demonstrates that it is
important for physicians and patients to use mobile technology in order to assist with
clinical decision-making.

2 Preliminaries
The ultrasonic diagnostic target in this experiment is the human brain. As shown in
Fig. 1, the human skull and a cerebral sulcus are imitated by a cow scapula and a steel
sulcus, respectively. The skull is the bony structure that forms part of the skeleton. A
sulcus is a term used to describe a depression, in particular one on the surface of the
brain. The ultrasonic data acquisition system is shown in Fig. 2. The cow scapula and
the steel sulcus are placed in a thermostatic water bath (Thomas Kagaku Co. Ltd.,
T-22L) in which the water temperature is adjusted to 20°C. The distance between the
array probe and the cow scapula is about 25 mm, and the distance between the cow
scapula and the steel sulcus is about 10 mm. The ultrasonic phased array (Eishin
Kagaku Co. Ltd., MC-64) transmits/receives ultrasonic waves via the array probe. The
sampling interval is 0.5 ms. At one time, we can obtain 32 ultrasonic waves at an
arbitrary position by manual scanning.

Fig. 1. Ultrasonic Diagnostic Target (Human Brain)



Fig. 2. Ultrasonic Data Acquisition System

2.1 Ultrasonic Array Probe


We employ a 0.5 MHz array probe (ISL Inc., ISL2022) and a 1.0 MHz array probe
(ISL Inc., ISL1938), as shown in Fig. 3 (a) and (b). The center frequency of the
ISL2022 is 0.5 MHz and that of the ISL1938 is 1.0 MHz.

Fig. 3. Ultrasonic Array Probe: (a) 0.5 MHz (ISL2022), (b) 1.0 MHz (ISL1938)

Fig. 4. Array Probe System: (a) combined ultrasound, (b) electronic control shift


Fig. 4 shows the system of these array probes. Each array probe consists of 32
elements at intervals of 1.5 mm. A voltage is applied to an element and ultrasound
is generated from that element. The applied voltage is then shifted by one element
and ultrasound is generated again. In this way, the array probe can obtain 32
ultrasonic waveforms in a line.
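The acquisition described above (apply a voltage to one element, receive a waveform, shift by one element, repeat across the 32 elements) can be sketched as a simple loop. This is an illustrative toy, not the MC-64 interface; `fire_element` and the sample count `N_SAMPLES` are invented stand-ins.

```python
import numpy as np

N_ELEMENTS = 32      # elements per array probe
PITCH_MM = 1.5       # element interval given in the text
N_SAMPLES = 256      # samples per received waveform (assumed value)

def fire_element(k):
    """Toy stand-in for 'apply the voltage to element k and receive one
    waveform'; a real system would return the digitized echo."""
    rng = np.random.default_rng(k)
    return rng.standard_normal(N_SAMPLES)

# Shift the applied voltage one element at a time, collecting one waveform
# per element: the 32 in-line waveforms form one frame.
frame = np.stack([fire_element(k) for k in range(N_ELEMENTS)])
lateral_mm = np.arange(N_ELEMENTS) * PITCH_MM  # element positions, 0 .. 46.5 mm
```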

2.2 Experimental Materials


In this study, we employ a cow scapula as a skull and a steel sulcus as a cerebral
sulcus. Fig. 6 shows the cow scapula. The thickness of ‘A’ is 2.64 mm and the
thickness of ‘B’ is 11.18 mm. The width is 110.0 mm. In this experiment, we employ
the part of the cow scapula about 2.6 mm because the average thickness of human
skull is about 3.0 mm. As a cerebral sulcus, we employ a steel sulcus as shown in Fig.
7. Table I shows the specification of the steel sulcus.

Fig. 5. Cow Scapula

1 2 3 4 5

Fig. 6. Steel Sulcus

Table 1. Specification of Steel Sulcus

            Sulcus 1  Sulcus 2  Sulcus 3  Sulcus 4  Sulcus 5
Width [mm]    51.96     34.64     24.25     17.32     10.39
Depth [mm]    15.00     10.00      7.00      5.00      3.00

2.3 Ultrasonic Image


Fig. 7 (a) and (b) show the ultrasonic B-mode images obtained using the 0.5 MHz and
1.0 MHz ultrasonic array probes, respectively.

Fig. 7. Ultrasonic B-mode Image: (a) 0.5 MHz, (b) 1.0 MHz

3 System Implementation

The linking of patient information with medical systems makes it possible to increase
the efficiency of health care innovation. The smart care system needs to be simple to
operate on both desktop and mobile devices. With integrated telecommunication, voice
recording, and direct dictation into the record, clinical governance is supported. The
decision support guides a self-designed clinical pathway through the input screens. We
implemented the ultrasonic smart medical system using an iPhone/iPad, as shown in
Fig. 8. After the ultrasonic images are processed and analyzed, the detailed data are
sent to the mobile devices.

Fig. 8. Ultrasonic Smart Medical System

The smart medical care application is a comprehensive iPhone/iPad application for
clinicians, social care workers, and family members to record their interactions with
patients and clients. We developed our application as shown in Fig. 9.

Fig. 9. Start Screen of Application

Fig. 10 (a) and (b) show the screens reached from the start screen. Fig. 10 (a) shows the
ultrasonic image with comments after the diagnosis. The received image is stored in the
database. By pushing the 'Share' button on this screen, the users can post a message
on Facebook and share the information with their family, as shown in Fig. 10 (b).

Fig. 10. Ultrasonic Image Screens: (a) ultrasonic image, (b) post-on-Facebook screen

Fig. 11 (a) and (b) are the 'Family' screens. The users can manage the members
who share the information, as shown in Fig. 11 (a). By pushing a name bar, they
can browse the details of the selected member, as shown in Fig. 11 (b).

Fig. 11. Family List Screens: (a) family list, (b) family details

Fig. 12. Schedule Screens: (a) schedule list, (b) venue details

Fig. 12 (a) and (b) are the 'Schedule' screens. The smart medical care application
connects to the patient management systems so that care can be scheduled and care
requirements distributed to the appropriate resources. Access is gained with a secure
login of username and password, after which users can access their schedule, view
family details with previous relevant history, record notes and clinical observations,
and schedule further appointments, as shown in Fig. 12 (a). By pushing a date bar, the
users can browse the venue details, as shown in Fig. 12 (b).
Fig. 13 (a) and (b) are the 'Gallery' screens. The users can browse the ultrasonic
images of members who have already been diagnosed, as shown in Fig. 13 (a). By
pushing an image, they can browse the full-screen image, as shown in Fig. 13 (b).

Fig. 13. Gallery Screens: (a) gallery list, (b) full-screen image

4 Discussion

We implemented the ultrasonic smart medical system using an iPhone/iPad. This
system has the following merits.
Patients' Merits:
1. Improvement of medical quality by sharing medical treatment information
2. Easy understanding of medical treatment contents
3. Disclosure of medical treatment information
4. Shorter waiting times at the hospital
5. Cooperation with other medical institutions through electronic data

Doctors'/Nurses' Merits:
1. Realization of a "readable" medical record
2. Support for informed consent
3. Immediate reference to the medical record (information and test results)
4. Improvement of medical quality by using the applications
5. Labor saving with electronic communication technology

In addition to storing individual personal health information, some PHRs provide
added-value services such as drug-drug interaction checking, electronic messaging
between patients and providers, appointment management, and reminders. However,
it is by no means obvious which communication technologies will be integrated into
electricity grids. Communication systems need to be seen as parts of larger systems,
including in particular health information processing systems. Therefore, this study
is useful for mobile health care systems.

5 Conclusion

Mobile smartphones and downloadable applications have become commonplace
in the medical field as personal and professional tools. The medically related apps
suggest that physicians use mobile technology to assist with clinical decision-making.
Physicians are quickly integrating smartphone apps, such as those available for
Apple and Android devices, into clinical practice. Smartphone apps are self-contained
software applications that can be downloaded onto advanced mobile phones. The
appeal of apps for users lies in their ability to store reference information, save
critical data, perform complex calculations, and access internet-based content.
The clinical use of smartphones and apps will continue to increase, yet there is an
absence of high-quality and popular apps despite the desire among physicians and
patients. This information should be used to guide the development of future health
care delivery systems. Moreover, reliability and ease of use will remain major
factors in determining the successful integration of apps into clinical practice.
This paper proposed mobile health care management in a smart medical system. We
used ultrasonic images acquired by two ultrasonic array probes with center
frequencies of 1.0 MHz and 0.5 MHz. We performed the experiment using a cow
scapula as a skull and a steel sulcus as a cerebral sulcus, and implemented the system
on iPhone and iPad. As a result, we developed a total system with a mobile phone
application for a medical ultrasonic system. It will be meaningful for ultrasound-
mediated diagnosis in emergency medicine and health care in the near future.

Acknowledgment. This work was supported in part by research grant from Japan
Power Academy.

References
[1] Agarwal, R., Angst, C.M.: Technology-enabled transformations in U.S. health care: early
findings on personal health records and individual use. Human-Computer Interaction and
Management Information Systems: Applications 5, 357–378 (2006)
[2] American Health Information Management Association: The Role of the Personal Health
Record in the EHR (July 25, 2005)
[3] Wear, K.A.: Autocorrelation and Cepstral Methods for Measurement of Tibial Cortical
Thickness. IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency
Control 50(5), 655–660 (2003)
[4] Krautkramer, J., Krautkramer, H. (eds.): Ultrasonic Testing of Materials. Springer, Berlin
(1990)
[5] Ikeda, Y., Kobashi, S., Kondo, K., Hata, Y.: Fuzzy Ultrasonic Array System for Locating
Screw Holes of Intramedullary Nail. In: Proc. 2006 IEEE International Conference on
Systems, Man, and Cybernetics, pp. 3428–3432 (2007)
[6] Hiramatsu, G., Ikeda, Y., Imawaki, S., Kitamura, Y.T., Yanagida, T., Kobashi, S., Hata,
Y.: Trans-skull Imaging System by Ultrasonic Array Probe. In: Proc. of 2009 IEEE Int.
Conf. on Systems, Man and Cybernetics, pp. 1116–1121 (2009)
[7] Yagi, N., Oshiro, Y., Ishikawa, O., Hata, Y., Kitamura, Y.T., Yanagida, T.: YURAGI:
analysis for trans-skull brain visualizing by ultrasonic array probe. In: Proc. of SPIE
Defence, Security and Sensing 2011, pp. 805813-1-9 (2011)
[8] Yagi, N., Oshiro, Y., Ishikawa, O., Hiramatsu, G., Hata, Y., Kitamura, Y., Yanagida, T.:
Data synthesis for trans-skull brain imaging by 0.5 and 1.0MHz ultrasonic array systems.
In: Proc. of 2010 IEEE Int. Conf. on Systems, Man and Cybernetics, pp. 1524–1529
(2010)
[9] Hiramatsu, G., Kobashi, S., Hata, Y., Imawaki, S.: Ultrasonic Large Intestine Thickness
Determination System for Low Anterior Resection. In: Proc. 2008 IEEE International
Conference on Systems, Man, and Cybernetics, pp. 3072–3076 (2008)
[10] Yagi, N., Oshiro, Y., Ishikawa, T., Hata, Y.: Ultrasonic image synthesis in Fourier
transform. In: Proc. of 2012 World Automation Cong. (2012)
[11] Yagi, N., Oshiro, Y., Ishikawa, T., Hata, Y.: Human brain ultrasound-mediated diagnosis
in emergency medicine and home health care. In: Proc. of the 6th International Conference
on Soft Computing and Intelligent Systems and 13th International Symposium on
Advanced Intelligent Systems, pp. 1269–1274 (2012)
Pseudo-normal Image Synthesis from Chest
Radiograph Database for Lung Nodule Detection

Yuriko Tsunoda1, Masayuki Moribe1, Hideaki Orii1,
Hideaki Kawano2, and Hiroshi Maeda2

1 Department of Electrical Engineering and Electronics, Graduate School of
Engineering, Kyushu Institute of Technology, Japan
n349421y@tobata.isc.kyutech.ac.jp
2 Department of Electrical Engineering and Electronics, Faculty of Engineering,
Kyushu Institute of Technology, Japan
kawano@ecs.kyutech.ac.jp

Abstract. The purpose of this study is to develop a new computer-aided
diagnosis (CAD) system for plain chest radiographs. It is difficult to
distinguish lung nodules in a chest radiograph. Therefore, CAD systems
that enhance lung nodules have been actively studied. The most notable
achievements are temporal subtraction (TS) based systems. The TS method
can comparatively suppress false alarms because it uses a chest radiograph
of the same person. However, the TS method cannot be applied to first-time
visitors because it requires a past chest radiograph of the patient. In this
study, to overcome the absence of a past image of the patient himself, a
pseudo-normal image is synthesized from a database of other patients' chest
radiographs that have already been diagnosed as normal by medical
specialists. The lung nodules are then emphasized by subtracting the
synthesized normal image from the target image.

1 Introduction

The number of deaths due to cancer (malignant neoplasms) in Japan accounts for
about 28.7% of the total number of deaths. Additionally, deaths from lung cancer
account for about 19.7% of all cancer deaths [1]. To decrease the number of
people killed by lung cancer, it is important to find lung cancers early and provide
proper medical care.
Plain chest radiographs are often used in group medical examinations and
routine physical examinations. It is hard for doctors to detect lung nodules at
an early stage by visual inspection because lung nodules are hidden by normal
structures and organs such as bones and soft tissues. Therefore, computer-aided
diagnosis (CAD) systems for plain chest radiographs are widely used to enhance
lung nodules visually.
A representative class of methods among CAD systems using plain chest radiographs
is the temporal subtraction (TS) based methods [2]. The TS-based methods can detect
differences between a past chest radiograph and a current chest radiograph, and
the regions changing over time are then enhanced as lesions. Since the subtraction
is performed between the past and the current chest radiograph of the same
patient, the TS method tends to produce fewer false alarms. However, the TS
method cannot be applied to first-time visitors because it requires a past chest
radiograph of the patient himself. Therefore, a similar-image subtraction method
that does not need a past chest radiograph has been studied [2]. This method
uses a normal image of another person instead of the patient's past radiograph. One
normal image is selected from many normal images of others and deformed to
fit the inspected patient; it is therefore hard to obtain an image that is similar in
detail. The purpose of this study is to enhance lung nodules by synthesizing
a pseudo-normal image from a database containing other patients' radiographs.

Y.S. Kim et al. (eds.), Advanced Intelligent Systems, 147
Advances in Intelligent Systems and Computing 268,
DOI: 10.1007/978-3-319-05500-8_14, © Springer International Publishing Switzerland 2014

2 Related Works

Lung nodules in a chest radiograph do not have clear boundaries and are hard to
recognize. Therefore, methods to enhance nodules have been actively studied. In this
section, several methods for enhancing lung nodules in a plain chest radiograph
are summarized and discussed with respect to their characteristics.

2.1 Energy Subtraction (ES) Based Method

The ES method requires two radiographs of the same patient taken under different
energy distributions, and separates two different tissues into individual images. This
is possible because the radiographic absorption characteristic varies according to the
substance, a phenomenon that makes it possible to reveal or erase a certain specific
substance. By taking advantage of this characteristic, one chest radiograph containing
only calcified tissues such as bone (the bone image) and another chest radiograph
containing only soft tissues can be obtained. Since the soft tissue image is the whole
image with bone tissues eliminated, it is used in the diagnosis of solitary tumor
nodules and lung cancer nodules that overlap bones.
However, because energy subtraction devices are still very expensive, their
penetration rate in ordinary hospitals is quite low. The device also imposes a large
burden on examinees by increasing the number of radiation exposures. Furthermore,
although the device can eliminate the shadows of bones, it is still hard to detect
lung cancer nodules hidden by the shadows of blood vessels.

2.2 Temporal Subtraction (TS) Based Method

The TS method uses two chest radiographs of the same patient for subtraction [2].
These are called the past chest radiograph and the current chest radiograph. It is
possible to reduce normal structures such as pulmonary vessels and rib bones, and
to emphasize lesion regions that have changed over time. Since it subtracts images
of the same patient, this method is less likely to produce noise.

However, the TS method demands a past chest radiograph that has no lung
nodules. Therefore, first-time visitors and visitors at group medical examinations
cannot use it because they have no past chest radiographs. Consequently, new
methods using only a current chest radiograph have been studied.

2.3 Contralateral Subtraction (CS) Based Method

This method, proposed by Li et al. [3],[4], emphasizes lung nodules by using the
fact that the human lungs are approximately symmetric. First, the rib cage boundary
is detected in a chest radiograph. Next, the detected lung region determined by the
rib cage boundary is translated and rotated so that the midline derived from the rib
cage matches the perpendicular center of the image. Then, a mirror-reversed image
is generated about the perpendicular center. Finally, cancer nodules can be
emphasized by inspecting the subtraction between the original target chest
radiograph and the mirror-reversed image. However, because the positions of normal
structures such as organs actually differ slightly between left and right, the
subtraction image includes many normal structures.
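The CS pipeline above (center the lung region on the midline, mirror, subtract) can be sketched in a few lines. This is a toy illustration, not the method of Li et al.; the midline column is assumed to be already detected and is passed in, and the rotation step is omitted.

```python
import numpy as np

def contralateral_subtraction(img, midline_col):
    """Mirror the image about the (pre-detected) midline and subtract, so
    left/right asymmetries such as nodules stand out."""
    h, w = img.shape
    shift = w // 2 - midline_col          # move the midline to the image center
    centered = np.roll(img.astype(float), shift, axis=1)
    mirrored = centered[:, ::-1]          # mirror about the vertical center
    return centered - mirrored

# A symmetric background cancels; an off-midline bright spot survives,
# appearing once positive and once negative.
img = np.zeros((8, 8))
img[3, 1] = 1.0                           # simulated nodule on one side
diff = contralateral_subtraction(img, midline_col=4)
```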

2.4 Similar Image Subtraction

Similar image subtraction is a method that uses a similar chest radiograph of
another person instead of the past radiograph of the same examinee, detecting lung
cancer nodules from only one current chest radiograph, as in contralateral
subtraction. In similar image subtraction, a database composed of normal chest
radiographs obtained from many people is prepared beforehand. First, an image
similar to the target image is found automatically in the database. Next, the similar
image is non-linearly deformed to better match the target image. Then, a subtraction
image between the target image and the deformed image is generated.
Oda et al. [5],[6] selected 4,000 images according to age and sex from a database;
the 4,000 images were narrowed down to 100 images according to the area and
height of the lung, and finally the most similar image was found among the 100
images. However, a similar image constructed from only one database image does
not fit at some parts.

3 Methods

3.1 Lung Region Extraction Processing

Extraction of the lung region is necessary in order to define the detection target
region of the CAD system. For lung region extraction, we use the large concentration
change caused by the difference in X-ray transmission between the peripheral region
and the lung field [7].
When the second derivative of the concentration profile is calculated with respect
to the vertical and horizontal directions, the point indicating the minimum value is
the edge of the lung field.
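The edge cue described above can be sketched for a single 1-D concentration profile. This is an illustrative sketch, not the implementation of [7]; the profile values are made up.

```python
import numpy as np

def lung_edge_from_profile(profile):
    """The second derivative of a 1-D concentration (intensity) profile takes
    its minimum where the profile bends down most sharply, i.e. at the
    lung-field edge where X-ray transmission changes."""
    d2 = np.diff(profile.astype(float), n=2)   # discrete second derivative
    return int(np.argmin(d2)) + 1              # +1 because diff shortens the axis

# Toy profile: bright periphery dropping into the darker lung field.
profile = np.array([200, 200, 200, 180, 120, 60, 50, 50, 50])
edge = lung_edge_from_profile(profile)
```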

3.2 Algorithm of Normal Image Synthesis

In this paper, we synthesize a pseudo-normal image corresponding to the target
image in order to perform subtraction. Pseudo-normal image synthesis demands
many plain normal chest radiographs as database images. These images have
already been diagnosed by doctors and have no lung nodules. A large number of
images and a wide range of ages and sexes are desirable.
The algorithm cuts patches from the target chest radiograph to decide similar local
regions. The origin is at the upper left of the target image, and a target image patch
is cut around it. We find the region most similar to the target image patch in the
search area within a database image, and cut a similar image patch of the same size
as the target image patch. The search area within the database image is centered at
the origin and is larger than the target image patch.
We repeat this process for all database images and decide a similar image patch in
each database image. The most similar patch is the one with the highest degree of
similarity among all similar image patches.
The most similar image patch is decided for the whole target image area while
moving the center of the target image patch. Fig. 1 shows the flow of the most
similar image patch decision. The most similar image patches have overlapping
regions. The pseudo-normal image is constructed by synthesizing the target image
area from all the most similar patches.
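The patch search and overlap averaging described above can be sketched as follows. This is an illustrative toy with reduced sizes (the experiments in Sect. 4 use 45 × 45 patches, a 51 × 51 search area, and a 4 px step); the per-patch mean adjustment mentioned in Sect. 4.4 is omitted.

```python
import numpy as np

def ncc(a, b):
    """Normalized correlation coefficient of two equal-size patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def synthesize_pseudo_normal(target, database, patch=9, search=13, step=4):
    """For every target patch, find the best-matching patch over all database
    images inside a search window centered at the same position, then average
    the overlapping best patches into the pseudo-normal image."""
    h, w = target.shape
    acc = np.zeros((h, w))
    cnt = np.zeros((h, w))
    margin = (search - patch) // 2
    for y in range(0, h - patch + 1, step):
        for x in range(0, w - patch + 1, step):
            tp = target[y:y + patch, x:x + patch]
            best, best_r = tp, -2.0
            for db in database:
                for dy in range(-margin, margin + 1):
                    for dx in range(-margin, margin + 1):
                        yy, xx = y + dy, x + dx
                        if 0 <= yy <= h - patch and 0 <= xx <= w - patch:
                            cand = db[yy:yy + patch, xx:xx + patch]
                            r = ncc(tp, cand)
                            if r > best_r:
                                best_r, best = r, cand
            acc[y:y + patch, x:x + patch] += best
            cnt[y:y + patch, x:x + patch] += 1
    cnt[cnt == 0] = 1
    return acc / cnt

# With the target itself in the database, the covered area is reproduced.
rng = np.random.default_rng(0)
target = rng.random((16, 16))
pseudo = synthesize_pseudo_normal(target, [target.copy()])
```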

3.3 Lung Nodule Emphasis


In general, the intensity of a lung nodule is higher than that of the surrounding
normal region. Therefore, the difference between the target image and the
synthesized normal image is positive at nodules. In this paper, lung nodules are
emphasized by applying a linear transform to the subtracted image.

4 Experiments and Results


4.1 Database Images
87 normal chest radiographs from the Standard Digital Image Database constructed
by the Japanese Society of Radiological Technology [8] were used as the database of
normal chest radiographs. In addition, 50 images were used as target images. The
database images and target images have already been diagnosed by doctors as to
whether a lung cancer is present. The number of normal images in the database is
actually 93; however, 6 of the 93 images (JPCNN003, 007, 044, 051, 077, and 083)
contain unusual objects such as shadows of medical equipment and sewing works.
Therefore, we used 87 images as the database.
The lung region extraction processing in this experiment cut a rectangle including
the midline and the lung region from each normal image. Then, all rectangles were
scaled to a matrix size of 2048 × 2048.

Fig. 1. Detection of most similar patch

4.2 Degree of Similarity

In this paper, we use the normalized correlation coefficient R to calculate the degree
of similarity in template matching. If the inner product of the two mean-subtracted
patches, viewed as vectors, is taken, the correlation coefficient R equals cos θ, the
cosine of the angle between them. The normalized correlation coefficient R is

R = \frac{\sum_i \sum_j (I(i,j) - \bar{I})(T(i,j) - \bar{T})}
    {\sqrt{\sum_i \sum_j (I(i,j) - \bar{I})^2} \sqrt{\sum_i \sum_j (T(i,j) - \bar{T})^2}}   (1)

Therefore, the range of R is -1 to 1. If the value of R is near 1, the image pair has
positive correlation; if it is near -1, the pair has negative correlation. The normalized
correlation coefficient can stably calculate the degree of similarity even if the
brightness fluctuates. Therefore, it can be used for position adjustment of two
images even when the images differ in contrast or brightness.
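Equation (1) can be written directly as a small function; the check below also illustrates the brightness/contrast insensitivity claimed in the text (a global gain and offset leave R unchanged).

```python
import numpy as np

def corr_coeff(I, T):
    """Normalized correlation coefficient R of Eq. (1): the cosine of the
    angle between the mean-subtracted patches viewed as vectors."""
    a = I.astype(float) - I.mean()
    b = T.astype(float) - T.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

rng = np.random.default_rng(1)
T = rng.random((8, 8))
# A global gain and offset leave R unchanged, which is why NCC can match
# patches across images that differ in brightness or contrast.
R_same = corr_coeff(2.0 * T + 30.0, T)
```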

4.3 Contrast

We calculated the contrast around the lung nodule region to numerically evaluate
the quality of the lung nodule emphasis. The contrast C is defined by the following
equation [9]:

C = \sum_i \sum_j \sqrt{D_x(i,j)^2 + D_y(i,j)^2}   (2)

where D_x and D_y are the first derivative values calculated with respect to the
horizontal and vertical directions, respectively. We define the contrast of the target
image as C_T and the contrast of the subtracted image as C_S. If C_S is larger than
C_T, the lung nodule emphasis is successful.
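Equation (2) can be sketched with forward differences for D_x and D_y (the derivative operator used in [9] may differ):

```python
import numpy as np

def contrast(img):
    """Contrast C of Eq. (2): the sum of gradient magnitudes over the region,
    here with forward differences cropped to a common size."""
    f = img.astype(float)
    dx = np.diff(f, axis=1)[:-1, :]   # horizontal first derivative
    dy = np.diff(f, axis=0)[:, :-1]   # vertical first derivative
    return float(np.sqrt(dx ** 2 + dy ** 2).sum())

flat = np.full((5, 5), 10.0)
edge = flat.copy()
edge[:, 2:] = 200.0                   # a sharp edge raises the contrast
C_T, C_S = contrast(flat), contrast(edge)
```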

4.4 Experimental Results


Pseudo-normal Image Synthesis. All database images are 2048 × 2048 [px] (12 bit). In the experiment, we resized the database images to 256 × 256 [px] (8 bit) to reduce computational time.
The parameters for constructing the simulated normal image were set as follows: the patch size cut from the target image was 45 × 45 [px], and the search area was 51 × 51 [px]. The central coordinate of the target image patch was moved every 4 [px]. Next, we took the subtraction between the target image and the synthesized normal image, and the lung nodule was detected.
The mean of the most similar image patch is matched to the mean of the target image patch. The most similar image patches have overlapping regions, which were synthesized by averaging the pixel intensities of the overlapping regions. Fig. 2 shows a part of the experimental results. This experiment could synthesize nodule-free images, but noise besides the lung nodule remained on the subtracted images.
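The patch-matching synthesis described above can be sketched as follows. The patch, search-area, and step sizes default to the values quoted in the text, but the similarity helper `_ncc`, the boundary handling, and all names are our illustrative assumptions, not the authors' code:

```python
import numpy as np

def _ncc(a, b):
    # Normalized correlation of two equally sized patches (Eq. 1);
    # the zero-variance guard is our own addition.
    da, db = a - a.mean(), b - b.mean()
    d = np.sqrt((da ** 2).sum() * (db ** 2).sum())
    return 0.0 if d == 0 else float((da * db).sum() / d)

def synthesize_normal(target, db_images, patch=45, search=51, step=4):
    """For each target patch (moved every `step` px), search every
    database image inside a (search - patch)/2 neighborhood for the
    most similar patch, shift its mean to the target patch's mean,
    and average the overlapping contributions."""
    H, W = target.shape
    acc = np.zeros((H, W))          # summed pasted intensities
    cnt = np.zeros((H, W))          # patches covering each pixel
    half, margin = patch // 2, (search - patch) // 2
    for cy in range(half, H - half, step):
        for cx in range(half, W - half, step):
            tp = target[cy - half:cy + half + 1, cx - half:cx + half + 1]
            best, best_r = None, -2.0
            for img in db_images:
                for dy in range(-margin, margin + 1):
                    for dx in range(-margin, margin + 1):
                        y, x = cy + dy, cx + dx
                        if y - half < 0 or x - half < 0 or y + half >= H or x + half >= W:
                            continue
                        cand = img[y - half:y + half + 1, x - half:x + half + 1]
                        r = _ncc(tp, cand)
                        if r > best_r:
                            best_r, best = r, cand
            pasted = best - best.mean() + tp.mean()   # match patch means
            acc[cy - half:cy + half + 1, cx - half:cx + half + 1] += pasted
            cnt[cy - half:cy + half + 1, cx - half:cx + half + 1] += 1
    cnt[cnt == 0] = 1               # uncovered border pixels stay 0
    return acc / cnt
```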

Comparison of Contrast. We subtracted the target images from the synthesized normal images and emphasized the lung nodules. In addition, we compared the contrast of the target images with that of the subtracted images; both were emphasized by a linear transform. Fig. 3 shows part of the images from this experiment. As a result, the contrast of every subtracted image was larger than that of the corresponding target image.
Fig. 4 shows graphs of the intensity around the lung nodule for each subtracted image and target image; the peaks of the graphs were aligned.

5 Discussion
We proposed a normal image synthesis method and obtained, from each target image, a normal image that contains no lung nodule. However, much noise besides the lung nodule remained on the subtracted images. The synthesized normal images are indistinct
Pseudo-normal Image Synthesis from Chest Radiograph Database 153

Fig. 2. Result of normal image synthesis. From left to right: target chest radiograph, synthesized normal chest radiograph, target image with linear transform, subtracted image with linear transform.

Fig. 3. Result of comparing contrast around the lung nodule: target chest radiograph (left), subtracted image (right).

Fig. 4. Graphs of contrast around the lung nodule. Solid lines show the target chest radiographs and dotted lines the subtracted chest radiographs, respectively. (a) Concentration profile of the result in Fig. 3a; (b) concentration profile of the result in Fig. 3c.

because the images were synthesized by averaging the most similar image patches. The quality of the synthesized normal image affects the accuracy of lung nodule detection, so we need to improve both the normal image synthesis process and the degree-of-similarity measure. In addition, the number of database images is very low, at 87; an existing method uses 14,564 database images. Enlarging the database is necessary because it increases the prospect of finding more similar patches. The peak of the graph corresponds to the lung nodule, as shown in Fig. 4(a); therefore, lung nodules are emphasized in the subtracted images. However, some graphs have more than one peak, as shown in Fig. 4(b). These graphs occur when the target image has lung nodules smaller than 15 mm in size or when lung nodules overlap normal regions. We have to remove noise using features of the lung nodules (degree of circularity, size, and so on) so that only lung nodules are emphasized.

6 Conclusion

In this paper, we have proposed a CAD system that supports a doctor's diagnosis using a single plain chest radiograph. In our experiments, synthesized normal images were constructed by the proposed method, and it was possible to emphasize the lung nodules in all target images. In future work, we will detect lung nodules automatically and compare the method with temporal subtraction and contralateral subtraction.

References
1. Ministry of Health, Labour and Welfare: Vital statistics of Japan (2012)
2. Oda, N., Kido, S., Shouno, H., Ueda, K.: Development of Computerized System
for Detection of Pulmonary Nodules on Digital Chest Radiographs Using Tempo-
ral Subtraction Images. Institute of Electronics, Information, and Communication
Engineers J87-D-II(1), 208–218 (2012)
3. Li, Q., Katsuragawa, S., Doi, K.: Improved contralateral subtraction images by use
of elastic matching technique. Medical Physics 27(8), 1934–1942 (2000)
4. Harada, Y., Kido, S., Shouno, H., Kakeda, S.: A Contralateral Subtraction Scheme
for Detection of Pulmonary Nodules in Chest Radiographs. IEICE Technical Re-
port MI2009-55, 1–6 (2009)
5. Oda, N., Aoki, T., Okazaki, H., Kakeda, S., Kourogi, Y., Yahara, K., Shouno, H.:
Development of Computerized System for Selection of Similar Images from Differ-
ent Patients for Image Subtraction of Chest Radiographs. JSMBE 44(3), 435–444
(2006)
6. Aoki, T., Oda, N., Yamashita, Y., Yamamoto, K., Kourogi, Y.: Usefulness of
computerized method for lung nodule detection on digital chest radiographs using
similar subtracted images from different patients. European Journal of Radiology 81,
1062–1067 (2012)
7. Ishida, T., Katsuragawa, S., Fujita, H.: Handbook of medical imaging, pp. 594–595.
Ohmsha (2000)
8. Japanese Society of Radiological Technology: Standard Digital Image Database:
Chest Lung Nodules and Non-nodules (1998)
9. Rich, R.: Image Contrast, Complexity, and Stability. Computer Vision Graphics and
Image Processing 26(3), 394–399 (1984)
Low-pass Filter’s Effects on Image Analysis Using
Subspace Classifier

Nobuo Matsuda1,*, Fumiaki Tajima2, Naoki Miyatake3, and Hideaki Sato4

1 Dept. of Electronic and Mechanical Engineering, Oshima National College of Maritime Technology, 1091-1, Komatsu, Suo-oshima-cho, Oshima-gun, Yamaguchi-ken 742-2193, Japan
matsuda@oshima-k.ac.jp
2 Education and Human Science, Yokohama National University, Hodogaya, Yokohama 240-8501, Japan
tajima@ynu.ac.jp
3 Chiba Institute of Science, Japan
4 Federation of National Public Service Personnel Mutual Aid Association, Japan

Abstract. This paper examines the effect of applying a low-pass filter on the performance of image analysis using the Subspace classifier. Feature extraction was first based on three kinds of intensity distributions, and the feature vector and subspace dimension for recognition were examined. Afterwards, a series of analyses of the accuracies was conducted for filtered and unfiltered images. The accuracies obtained with the Subspace classifier were also compared with the results of another technique, Learning vector quantization (LVQ).

Keywords: Subspace Classifier, Feature Space, Low-pass Filter, Learning Vector Quantization, Fundus Image.

1 Introduction
Attempts to detect early-stage glaucoma from fundus images by image processing techniques have been proposed [1-2]. There are several conventional methods for classification, such as hierarchical clustering, self-organizing maps, and the EM algorithm. We have proposed a method for fundus diagnosis using Learning vector quantization (LVQ) [3], a supervised learning method [4]. The Support vector machine (SVM) [5] is often adopted in recent research papers. A glaucoma diagnosis method using a data mining technique has been proposed by Nishiyama et al. [6]. However, the assessment of accuracy is difficult, and performance generally depends on the choice of parameters and the data distributions, even when a technique with high recognition accuracy such as the SVM is employed.
We focused on the fact that the Subspace classifier [7] offers simple parameter selection as well as high classification performance. Hence we have proposed the image analysis
*
Corresponding author.

Y.S. Kim et al. (eds.), Advanced Intelligent Systems, 157


Advances in Intelligent Systems and Computing 268,
DOI: 10.1007/978-3-319-05500-8_15, © Springer International Publishing Switzerland 2014

using the Subspace classifier [8]. In this paper, we describe the optimal feature space and focus on the effect on classification when the low-pass filter is applied as preprocessing in the image analysis.

2 Subspace Classifier Method

Let N be the dimension of the pattern space, i.e., the number of elements of a pattern vector. Let φ_i denote the reference vectors, which are orthonormal, let r be the number of reference vectors, and let x be an input vector. The similarity S is defined as

S = \sum_{i=1}^{r} (x, \varphi_i)^2 .    (1)
If the vectors φ_i and x are not normalized, the similarity is defined by the following general equation:

S = \sum_{i=1}^{r} \frac{(x, \varphi_i)^2}{\|x\|^2 \, \|\varphi_i\|^2} .    (2)

The reference vectors are defined for each category, and the similarity S is also calculated for each category. The category returned as the answer is the one with the maximum similarity. Here, note that r is the dimension of the space spanned by the φ_i, while N is the dimension of the total space.
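The classification rule can be sketched as below. Eq. (2) fixes the similarity; how the orthonormal reference vectors φ_i are built per category is not restated here, so the SVD-based construction in `subspace_bases` (the leading eigenvectors of the class autocorrelation matrix) is an illustrative assumption:

```python
import numpy as np

def subspace_bases(X, r):
    """Orthonormal reference vectors phi_i (rows) for one category,
    taken as the top-r right singular vectors of the training matrix X
    (rows = pattern vectors). Illustrative choice, not the paper's."""
    _, _, Vt = np.linalg.svd(np.asarray(X, dtype=float), full_matrices=False)
    return Vt[:r]

def similarity(x, phis):
    """Similarity S of Eq. (2): sum_i (x, phi_i)^2 / (|x|^2 |phi_i|^2)."""
    x = np.asarray(x, dtype=float)
    num = (phis @ x) ** 2
    den = (x @ x) * (phis ** 2).sum(axis=1)
    return float((num / den).sum())

def classify(x, class_bases):
    """The answer is the category whose subspace has maximum similarity."""
    return int(np.argmax([similarity(x, B) for B in class_bases]))
```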

3 Experimental Data and Analytical Method


3.1 Input Data

A series of experiments was conducted with fundus images produced by a clinical


doctor. The total number of images was 133: 91 normal subjects and 42 abnormal
ones. Colored fundus photographs of 24 bit RGB bitmaps as shown in Figure 1(a)
were acquired with a scanner.
The data used in our experiments were the intensity values of the fundus images. The intensity plane of the 2-D image was partitioned into 24 channels as shown in Figure 1(b), and the mean was computed in each of these 24 channels in the intensity domain. The partitioning was uniform along the angular direction (an equal step size of 15 degrees) and along the radial direction (an equal step size of 10 dots). With the input data prepared in this way, the minimum dimensionality of an input dataset was 24 and the maximum was 120.
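The partition described above can be sketched as follows; the choice of center (cx, cy) and the handling of the angular origin are our assumptions, since the paper does not state them:

```python
import numpy as np

def channel_means(img, cx, cy, n_rings=5, ang_step=15.0, ring_step=10.0):
    """Mean intensity per (ring, sector) channel of the polar partition
    described above: 360/15 = 24 angular sectors per ring and rings of
    10-dot radial width, i.e., 24 channels per ring and 120 channels
    for all five rings."""
    img = np.asarray(img, dtype=float)
    H, W = img.shape
    y, x = np.mgrid[0:H, 0:W]
    r = np.hypot(x - cx, y - cy)
    theta = np.degrees(np.arctan2(y - cy, x - cx)) % 360.0
    n_ang = int(round(360.0 / ang_step))
    means = np.zeros((n_rings, n_ang))
    for ri in range(n_rings):
        for ai in range(n_ang):
            mask = ((r >= ri * ring_step) & (r < (ri + 1) * ring_step) &
                    (theta >= ai * ang_step) & (theta < (ai + 1) * ang_step))
            if mask.any():
                means[ri, ai] = img[mask].mean()
    return means.ravel()    # one ring -> 24-D feature, five rings -> 120-D
```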
The input data were divided into datasets with three different dimensionalities. The first type of dataset is a ring region with 24 channels; there are 5 such datasets, numbered in the direction from inside to outside. The second type, called a zone region, has 48 channels and is made from two adjacent rings. The

number of these datasets is 4, and each zone is also numbered in the direction from inside to outside. The third input dataset has 120 channels and is made from all five rings.

3.2 Gaussian Filter


Gaussian filters are a class of linear smoothing filters whose weights are chosen according to the shape of a Gaussian function. For image processing, the two-dimensional zero-mean discrete Gaussian function

G(i, j) = \exp\!\left(-\frac{i^2 + j^2}{2\sigma^2}\right)    (3)

is used as a smoothing filter, where the Gaussian spread parameter σ determines the width of the Gaussian. A large σ implies a wider Gaussian filter and greater smoothing.
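A direct implementation of Eq. (3) as a smoothing kernel follows; the 3σ truncation radius and the normalization to unit sum are conventional choices, not taken from the paper:

```python
import numpy as np

def gaussian_kernel(sigma, radius=None):
    """Discrete zero-mean Gaussian of Eq. (3), normalized to unit sum
    so that smoothing preserves the image's mean intensity."""
    if radius is None:
        radius = int(3 * sigma)          # conventional truncation
    ax = np.arange(-radius, radius + 1)
    i, j = np.meshgrid(ax, ax, indexing="ij")
    G = np.exp(-(i ** 2 + j ** 2) / (2.0 * sigma ** 2))
    return G / G.sum()

def smooth(img, sigma):
    """Low-pass filtering by direct 2-D convolution with edge padding
    (illustrative; a real pipeline would use a separable implementation)."""
    img = np.asarray(img, dtype=float)
    G = gaussian_kernel(sigma)
    r = G.shape[0] // 2
    padded = np.pad(img, r, mode="edge")
    out = np.empty_like(img)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            out[y, x] = (padded[y:y + 2 * r + 1, x:x + 2 * r + 1] * G).sum()
    return out
```

A larger σ spreads the unit mass over a wider support, so the central weight shrinks and the smoothing becomes stronger, matching the remark above.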

Fig. 1. (a) Fundus image: excavation cup and optic nerve disc, (b) Channel Configuration

3.3 Approach

First, experiments on feature extraction were conducted on the intensity values with three kinds of channel numbers: 24-D, 48-D, and 120-D. The best of these features were then selected by the highest accuracy. After determining the best feature for classification, experiments on classification accuracy were conducted by cross-validation.

Feature Extraction. While varying the dimensionality of the spanned subspace, the similarities for each class were calculated by Eq. (1) from the input datasets. The optimal feature and the dimensionality of the spanned subspace were determined from the maximum accuracy.

Classification Performance. For the feature data selected in the feature extraction experiments, the recognition accuracy of the Subspace method was determined by a cross-validation test.
Cross-validation is performed to determine the classification accuracy on test datasets; it is a measure of classification performance on unknown data. We used 10-fold cross-validation, which proceeds as follows. First, the dataset is divided randomly into two subsets: one containing 90% of the data, defined as the training set, and the other containing 10%, defined as the testing set. Next, rules are learned from the training set and then validated on the testing set. This process is repeated 10 times.
Accuracy, sensitivity, and specificity must be examined simultaneously when assessing classification performance. In general, sensitivity and specificity trade off against each other: when one rises, the other falls.

4 Analytical Result

4.1 Feature Extraction

Examples of the covariance matrices for normal and abnormal subjects are shown in Figure 2(a). Since the covariance matrices of the two classes differ, it can be expected that the datasets will be classified well. Examples of eigenvalues and their corresponding cumulative proportion are shown in Figure 2(b). The eigenvalues decrease rapidly with increasing dimensionality r and are very small for r > 3. The cumulative proportion reaches a value of 0.99 when the dimensionality r is about 4.
Figure 3 shows the classification results for the learning data of the zone regions as the spanned space dimensionality r varies from 1 to 48. Table 1 summarizes the maximum recognition rates for the training data when the dimension r of the spanned space was varied from 1 to its maximum value. The figure and table show that the classification accuracy for the training data rises with increasing dimensionality of the spanned space in every region. These feature extraction experiments indicate that the feature data with 48 channels is suitable for classification. Based on this result, the cross-validation test was performed on the four zone datasets.

Fig. 2. Covariance matrices and corresponding eigenvalues and cumulative proportion: (a) covariance matrices for Normal (class=1) and Abnormal (class=2); (b) eigenvalues and cumulative proportion

Table 1. Accuracies [%] and number of channels

Channels   Re.1   Re.2   Re.3   Re.4   Re.5
24         87.2   87.2   88.7   87.2   89.5
48         97.0   97.0   97.0   96.2   -
120        97.0

4.2 Analytical Results

Cross-validation. Figure 4 shows the results of cross-validation for the testing feature data of the four zone regions. The cross-validation shows that the variation of overall accuracy with respect to the dimensionality is small. Almost all zone data attain their maximum accuracy when the dimensionality r is 2. Table 2 lists the maximum accuracy for all zone data.
On the other hand, the values of specificity and sensitivity depend considerably on the dimensionality. One can see that a large dimensionality reduces the sensitivity remarkably, and it also reduces generalization ability. It is possible that a lack of training data for abnormal subjects causes a singular matrix, or that overtraining takes place during the learning process. Since the detection of abnormal subjects is important in real clinical diagnosis, the spanned dimension should be in the range of 2 to 6 in the image analysis.

Table 2. Performances [%]

Zone   r   Accuracy   Specificity   Sensitivity
1      2   70.7       66.7          72.5
2      2   71.4       69.1          72.5
3      2   74.4       73.8          77.7
4      6   71.4       52.4          80.2

Fig. 3. Accuracies of zone data against dimensionalities r



Fig. 4. Cross-validation: (a) accuracy, (b) specificity, and (c) sensitivity

Comparison with Other Methods. In order to evaluate the classification performance of the Subspace classifier, these results were compared with cross-validation results from another method. The Learning vector quantization method was selected for comparison because LVQ showed high computational performance and easy parameter handling in our experiments on fundus image analysis. For the classification, we used the LVQ_PAK program package Version 3.1 [9], from which we used the program LVQ2. The main parameters used in our experiments were the same as in Reference [8].
Figure 5 shows the results of classification using LVQ2 for the testing datasets of the third zone region as the number of prototypes, NOC, varied from 2 to 10. The maximum accuracy was 74.1%, with specificity and sensitivity of 88.1% and 43.3%, respectively, at NOC = 7.

In the Subspace classifier method, on the other hand, accuracy, specificity, and sensitivity were 74.4%, 77.2%, and 73.8%, respectively, at r = 2 (Table 2). Comparing the two methods, one can see that the performance of the Subspace classifier is superior in accuracy and sensitivity; the value of sensitivity was especially high.

Fig. 5. Cross-validation using LVQ2

Effects on Analytical Performance. Table 3 lists the results of the cross-validation using the Subspace classifier for two cases: with the low-pass filter applied as preprocessing, and without it. From this table, one can see that at σ = 1 specificity decreased from 77.2% to 76.2%, but sensitivity increased from 73.8% to 76.2%, and overall accuracy rose from 74.4% to 77.1%, an improvement of about 3.5%.

Table 3. Low-pass filter's effect [%]

Filter    Accuracy   Specificity   Sensitivity
None      74.44      77.17         73.81
σ=1       77.05      76.22         76.22
σ=2.5     77.44      81.32         69.05

When σ = 2.5, one can also see that sensitivity decreased from 73.8% to 69.1%, but specificity rose from 77.2% to 81.3%, and accuracy increased from 74.4% to 77.4%. Hence the classification performance was finally improved by about 4.0%. As the amount of blur increases, the accuracy increases but the sensitivity decreases. Since sensitivity is very important for fundus diagnosis, we will need to search for the optimal amount of blur in future work.

5 Conclusion

The classification performance of image analysis using the Subspace classifier was examined. The following results were obtained from a series of experiments: feature extraction, the cross-validation test, filtered preprocessing, and comparison with the LVQ method.

1. The cross-validation of image analysis using the Subspace classifier shows a high accuracy for learning datasets. The Subspace classifier method has generalization discrimination equal to or greater than that of the LVQ method.
2. The cross-validation for the feature data without filtered preprocessing showed that the maximum accuracy was 74.4%, with specificity and sensitivity of 73.8% and 77.7%, respectively.
3. When the parameter σ of the Gaussian filter was varied in the range of 1 to 2.5, the classification accuracy improved by 3.5 to 4.0%. Preprocessing by the low-pass filter was effective in improving the classification performance.

References
1. Tajima, F., Miyatake, N., Sato, H., Matsuda, N.: Japan Unexamined Patent Kokai No.
253796 (2005)
2. Tajima, F., Chen, Y., Miyatake, N., Sato, H., Matsuda, N.: Analysis of Eyeground Images
for Diagnosis of Eyeground Diseases (1) Pseudo Three Dimensional Image of Optic Nerve
Nipple Part and its Conversion to Locally Planar Inclination Image. In: 20th Fuzzy System
Symposium Proceedings, p. 50 (2004) (in Japanese )
3. Kohonen, T.: Self-Organizing Maps. Springer Series in Information Sciences, vol. 30
(2001)
4. Matsuda, N., Laaksonen, J., Tajima, F., Miyatake, N., Sato, H.: Comparison with Observer
Appraisals of Fundus Images and Diagnosis by using Learning Vector Quantization. In:
23th Fuzzy System Symposium Proceedings, pp. 415–418 (2007) (in Japanese)
5. Cortes, C., Vapnik, V.N.: Support-vector networks. Machine Learning 20, 273–295 (1995)
6. Nishiyama, H., Hiraishi, H., Iwase, A., Mizoguchi, F.: Design of Glaucoma Diagnosis
System by Data Mining. In: 3A1-4 The 20th Annual Conference of the Japanese Society for
Artificial Intelligence (2006) (in Japanese)
7. Watanabe, S., Pakvasa, N.: Subspace method of pattern recognition. In: 1st International
Joint Conference of Pattern Recognition Proceeding, pp. 25–32 (1973)
8. Matsuda, N., Laaksonen, J., Tajima, F., Miyatake, N., Sato, H.: Fundus Image Analysis
using Subspace Classifier and its Performance. In: Proceedings of the Joint 5th International
Conference on Soft Computing and Intelligent Systems and 11th International Symposium
on Advanced Intelligent Systems, pp. 146–151 (2010)
9. Kohonen, T., Kangas, J., Laaksonen, J., Torkkola, K.: LVQ-PAK: The Learning Vector
Quantization Program Package. Helsinki University of Technology, Finland (1995)
A New Outdoor Object Tracking Approach
in Video Surveillance

SoonWhan Kim and Jin-Shig Kang*

Dept. of Tele-Communication Eng., Jeju National University


{soonkim,shigkj}@jejunu.ac.kr

Abstract. In this paper, a modified expansion-contraction algorithm for tracking mobile objects in an outdoor environment is studied. Object tracking in an outdoor environment differs from indoor tracking, so modification of the algorithm is required. A new object extraction method and a new background updating algorithm are presented; these two methods minimize the effects of changes in lighting conditions. Nevertheless, the basic algorithm using expansion-contraction of the object window is maintained, and moving objects can be tracked efficiently through simple operations. To show the effectiveness of the proposed algorithm, several experiments were performed on a variety of scenarios, and three of them are included in this paper. The performance of the proposed algorithm is maintained even under dramatic changes in lighting conditions.

Keywords: object tracking, mobile object tracking, video surveillance, expansion-contraction algorithm.

1 Introduction

Recently, many results on mobile object tracking in video surveillance have been reported [1]. Typical methods for tracking moving objects are the Kalman filtering algorithm [2, 3], the particle filtering algorithm [4, 5, 6], and the mean shift / cam shift algorithm [7, 8]. Q. Zhou et al. [2] propose a feature-based algorithm using a Kalman filter to handle multiple-object tracking: the Kalman filter establishes an object motion model and uses the current object's information to predict the object's position, which reduces the search scope and search time of the moving object and achieves fast tracking. Cory Miller et al. [3] present a modified Kalman filter estimator of object location and velocity that is robust to measurement occlusion and spurious measurements. S. Särkkä et al. [4] propose a new Rao-Blackwellized particle filtering based algorithm for tracking an unknown number of targets; the algorithm is based on formulating probabilistic stochastic process models for target states, data associations, and birth and death processes. M. Jaward et al. [5] extend a single-object particle-filter tracking algorithm to multiple objects. Comaniciu, D. et al. [7] suggest that effective image analysis can be

*
Corresponding author.

Y.S. Kim et al. (eds.), Advanced Intelligent Systems, 167


Advances in Intelligent Systems and Computing 268,
DOI: 10.1007/978-3-319-05500-8_16, © Springer International Publishing Switzerland 2014

implemented based on the mean shift procedure. Comaniciu, D. et al. [8] present a new paradigm for efficient color-based tracking of objects seen from a moving camera; the technique employs mean shift analysis to derive the target candidate most similar to a given target model, while the prediction of the next target location is computed with a Kalman filter. Aggarwal [9] presents object tracking in an outdoor environment that integrates spatial position, shape, and color information to track object blobs. G. L. Foresti [10] treats a visual surveillance system for remote monitoring of unattended outdoor environments; the system can automatically detect, localize, track, and classify multiple objects.
In this paper, a modified expansion-contraction method of object tracking for an outdoor environment is presented. The expansion-contraction algorithm presented in [11] is summarized briefly, and we then present a new object extraction method and background update algorithm, which are the main contributions of this paper. To show the effectiveness of the proposed algorithm, several experiments were performed on a variety of scenarios, and three of them are included in this paper. The performance of the proposed algorithm is maintained even under dramatic changes in lighting conditions.

2 Problem Formulation and Some Definitions

In this section, whole process of object tracking presented in this paper is described.
Overall system flow is described, and algorithm for updating background image is
suggested. Also a method of extension and contraction of the object window and
selection of object by color information are described.

2.1 Summary of the Object Tracking Procedure Presented in [11]

The overall process of object tracking is shown in Fig. 1. The first step is the initialization process, in which the initial position of the target object is computed and the extended initial object window is selected. Also selected is (Δx, Δy), the initial value of the variation of the center of mass of the target object, and the predicted center-of-mass position (x̂, ŷ) is computed for the next frame; processing then moves to the first frame. The second step extracts sub-images from the background frame and the current frame; here the center is the predicted center-of-mass position (x̂, ŷ), and the window size is three or four times that of the previously selected object window. The next step calculates the absolute difference of the two sub-images obtained in the previous step and converts it into a binary image by a threshold operation. The fourth step calculates diag(IIT) and diag(ITI), contracts the extended object window to the object window, and extracts the target object; at this step, the area of the target object, the actual center-of-mass position (x, y), and the extension-contraction parameter are calculated. In the final step, the predicted center-of-mass position (x̂, ŷ) is computed, and processing moves to the next frame.

Fig. 1. The overview of the system flow

2.2 Extension and Contraction of the Object Window

The center-of-mass position (x_k, y_k) for the kth frame is described by

x_k = x_{k-1} + \Delta x_k + n_{x,k}    (5.a)
y_k = y_{k-1} + \Delta y_k + n_{y,k}    (5.b)

where n_{x,k} and n_{y,k} are noise terms. For the (k+1)th frame, the predicted center-of-mass position (\hat{x}_{k+1}, \hat{y}_{k+1}) is

\hat{x}_{k+1} = x_k + \Delta x_k    (6.a)
\hat{y}_{k+1} = y_k + \Delta y_k    (6.b)

For the case of multiple-target tracking, the predicted position of the jth object is

\hat{x}_{j,k+1} = x_{j,k} + \Delta\hat{x}_{j,k}    (9.a)
\hat{y}_{j,k+1} = y_{j,k} + \Delta\hat{y}_{j,k}    (9.b)

The method adopted in this paper for calculating the predicted center of mass of the target object is very simple and is sufficient for target tracking; of course, the Kalman filtering method or the particle filter algorithm could be used instead of equation (5). Because the actual operation is performed on the Ix and Iy axes rather than on the image frame, the extension and contraction of the object window is very simple. The same operation on Ix and on Iy is shown in the middle and bottom figures, respectively. The procedure consists of two steps: extend and contract the object window on the Ix axis, and then on the Iy axis.

During the contraction operation, the extension-contraction parameter ECpar plays an important role. ECpar is greater than 1; it takes the value 2 when the ratio of the object area to the total area of the object window is 50%, and the value 3 when the ratio is 30%. If ECpar is near 1, the object is too large compared to the object window; if it takes a value near 3 or 4, the object is very small compared to the object window. Thus it is reasonable to keep ECpar at about 2: when ECpar is near 1, the object window must be extended, and when it is much greater than 2, the object window must be contracted. To maintain the performance of the system, the appropriate ECpar value is about 1.5 to 2.
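The behavior described above (ECpar = 2 at a 50% fill ratio, larger as the object shrinks relative to its window) is consistent with the reciprocal of the fill ratio; the sketch below encodes that reading and the recommended 1.5-2 operating band. The reciprocal form and the function names are our assumptions:

```python
def ec_parameter(object_area, window_area):
    """Expansion-contraction parameter: reciprocal of the fill ratio
    (2 when the object covers 50% of the window; about 3 at 30%)."""
    return window_area / object_area

def adjust_window(ec, low=1.5, high=2.0):
    """Keep ECpar near the recommended 1.5-2.0 band: extend the window
    when the object is too large for it, contract when it is too small."""
    if ec < low:
        return "extend"
    if ec > high:
        return "contract"
    return "keep"
```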

3 Object Tracking in an Outdoor Environment

3.1 Modified Algorithm for Extracting the Target Object

Lighting conditions in an outdoor environment are not constant, and the captured images are affected by highlights and shadows. The difference image between the current frame and the background frame contains many noise sources. For these reasons, the algorithm extracting the target object must be modified. To remove the background from the current frame effectively, the method proposed in this paper works in two ways. One is to modify the background image frame according to changes in the environment, which is discussed in detail in the following subsection. The other is to modify the target extraction algorithm. The target extraction method suggested in [11] starts from a binary image obtained from the difference between the current frame and the background image. However, if the time difference between the background frame and the current frame becomes large, the obtained difference image will be full of noise. The difference image for obtaining a binary image can instead be obtained from the following equation:

B(I_k, I_B) \wedge B(I_k, I_{k-1})

where B(I_k, I_B) is the binary image of the absolute difference between frame I_k and the background frame, and \wedge denotes the logical AND operator. B(I_k, I_B) is obtained by computing the absolute difference of the current frame I_k and the background frame I_B, converting the result to grayscale, and computing a binary image by a threshold operation. The elements of the matrices B(I_k, I_B) and B(I_k, I_{k-1}) consist of ones and zeros, so the AND operation of the two matrices can be computed.
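The logical AND of the frame-vs-background and frame-vs-previous-frame binary difference images described above can be sketched directly; the threshold value 30 is an arbitrary illustrative choice:

```python
import numpy as np

def binary_diff(a, b, thresh=30):
    """B(a, b): threshold the absolute difference of two grayscale
    frames into a 0/1 image."""
    return (np.abs(a.astype(int) - b.astype(int)) > thresh).astype(np.uint8)

def extract_target(frame, prev_frame, background, thresh=30):
    """Modified extraction: AND of the frame-vs-background and the
    frame-vs-previous-frame binary differences, which suppresses noise
    that appears in only one of the two difference images."""
    return binary_diff(frame, background, thresh) & binary_diff(frame, prev_frame, thresh)
```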

3.2 Updating Background Image


For outdoor applications of mobile target tracking, it is very important to update the background image because of abrupt changes in lighting conditions, clouds, shadows, and the unexpected appearance of other objects. Figure 2 consists of images captured at approximately 0.171-second intervals starting from 18:41. As shown in Figure 2, the lighting conditions change dramatically, and if the background frame is not updated, the binary image will be filled with noise.
The background updating algorithm suggested in this paper is very simple and consists of only two steps. The first step is, for the kth frame I_k, finding the elements (i, j) of the matrix B(I_k, I_B) \wedge B(I_k, I_{k-1}) with value 0. In the second step, the (i, j) elements of the background frame are replaced by the (i, j) elements of the kth frame, I_k(i, j).
Figures 2 and 3 are a simple example of the proposed algorithm. The left part of Fig. 3 consists of binary images obtained with a fixed background image, and the right part consists of binary images obtained with the updated background image. With the background update, the noise was significantly decreased. If the update cycle of the background frame becomes short, the noise due to environmental change is very small, but the computation time increases. It is therefore important to choose the right update cycle, which is affected by changes in light conditions, weather, clouds, etc. The update cycle of the background frame should be short when weather changes are severe or during the dawn and evening hours, but the background frame refresh interval can be chosen large when there is little weather change during the daytime.

Fig. 2. Images with changing light conditions



Fig. 3. Binary images obtained without background update (left) and with background update (right)

4 Experiment

To demonstrate the efficiency of the presented algorithm, experimental results for three scenarios are described. The first experiment tracks the movement of two people walking on a quiet road, the second tracks a vehicle on the same road, and the third tracks a motorcycle. In experiments 2 and 3, the vehicles travel at medium speed. In these experiments, a personal computer with an i7 CPU, a 3.50 GHz clock speed, and 32 GB of memory was used, together with a Microsoft LifeCam 2000 camera module. For every experiment, the time interval between frames is approximately 0.157 seconds, which is the time taken to finish all operations on one frame.

4.1 Scenarios 1: Multi--human Tracking


Consider the process of two people walking from the left corner toward the top
center. Fig. 4 shows the initial step during the course of the experiment. In this figure,
the upper-left image is one of the original images, the upper-right is a binary image of
the object window, the bottom-left figures are diag(IIT) and diag(ITI), and the
bottom-right figures are images of IIT and ITI, respectively. The binary image of the
object window is obtained by predicting the center of mass for the next, (k+1)th,
image, extracting the object window for the (k+1)th image by expansion and
contraction, and applying a threshold operation. The tracking results for scenario 1
are shown in Fig. 5 and Fig. 6. Fig. 5 consists of 16 images, each sampled as 1 of
every 10 frames. The experiment is performed for every frame, with frames captured
at a rate of 8 frames per second. The object window for each frame is displayed as a
small black-lined box. Fig. 6 consists of 16 binary images, each being the binary
image of the object window corresponding to Fig. 5. As shown in Fig. 5 and Fig. 6,
the target object is tracked well even though the target is obscured by a parked car
and swaying tree branches.
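The window-extraction steps above can be sketched as follows. This is a minimal reconstruction under assumed details: a constant-velocity prediction of the center of mass and a fixed window half-size instead of the paper's expansion-contraction rule; all function names are hypothetical.

```python
import numpy as np

def center_of_mass(mask):
    """Centroid (row, col) of the nonzero pixels of a binary mask."""
    ys, xs = np.nonzero(mask)
    return ys.mean(), xs.mean()

def predict_center(c_prev, c_curr):
    """Constant-velocity prediction of the (k+1)th center of mass from the
    (k-1)th and kth centers."""
    return (2 * c_curr[0] - c_prev[0], 2 * c_curr[1] - c_prev[1])

def extract_window(mask, center, half=8):
    """Crop the object window around the predicted center; the paper grows
    or shrinks this window by expansion and contraction, while a fixed
    half-size is used here for simplicity."""
    r, c = int(round(center[0])), int(round(center[1]))
    r0, c0 = max(r - half, 0), max(c - half, 0)
    return mask[r0:r + half + 1, c0:c + half + 1]
```

Applying a threshold to the cropped window and recomputing its center of mass would then close the per-frame tracking loop.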
A New Outdoor Object Tracking Approach in Video Surveillance 173

Fig. 4. This figure shows the initial step during the course of the experiment. Experimental
image (up-left), binary image of object window (up-right), diag(IIT) and diag(ITI) (bottom
left), and images of IIT and ITI (bottom-right).

Fig. 5. Tracking results for scenario 1. Each figure is sampled as 1 of 10.

Fig. 6. Corresponding binary images of object windows for Fig. 5

4.2 Scenario 2: Motorcycle Tracking

In the second experimental scenario, a motorcycle proceeding in the opposite
direction is tracked. Near the starting point, a white car is parked, which acts as an
obstacle. There is a tree between the white-colored car and a silver-colored one; it
also acts as an obstacle. As shown in Fig. 7 and Fig. 8, the motorcycle takes on an
ambiguous shape while passing through the portion with the tree, and this brought
performance degeneration. However, this problem can be removed by modifying the
program so that the size of the object window easily returns to its original size, as
shown in the next scenario.

4.3 Scenario 3: Vehicle Tracking

The third experimental scenario tracks a silver-colored car, which appears near the
center and moves toward the left side. Fig. 9 shows the initial step during the course
of the experiment. The left figure is the original frame captured by the camera and the
right figure is a binary image of the object window. Experimental results for this
scenario are shown in Fig. 10 and Fig. 11, each of which consists of 16 figures
sampled as 1 of 5. As shown in these figures, the object windows are selected
appropriately, and the predictions and calculations of the location of the center of the
object are done well.

Fig. 7. Tracking results for scenario 2. Each figure is sampled as 1 of 10.

Fig. 8. Corresponding binary images of object windows for Fig. 7

Fig. 9. This figure shows the initial step during the course of the experiment. Experimental
image (up-left), binary image of object window (up-right).

Fig. 10. Tracking results for scenario 3. Each figure is sampled as 1 of 5.

Fig. 11. Corresponding binary images of object windows for Fig. 10

5 Conclusion
In this paper, mobile object tracking for an outdoor environment is studied. A new
moving object extraction algorithm and a background update algorithm are proposed.
The moving object is extracted by using the background frame, the current frame, and
the previous frame, and the background frame is updated by using pixels that are the
same in the current frame and the previous frame. Background updating is relatively
time-consuming, but performance can be maintained by adjusting the frequency of
use. In this paper, the proposed algorithm has been demonstrated through various
experiments, three of which are included. Experiments using a standard PC and
modifications of the algorithm are required, and these will be studied in further
research.

References
1. Yilmaz, A., Javed, O., Shah, M.: Object tracking. ACM Comput. Surv. 38(4), 13–es
(2006)
2. Li, X., Wang, K., Wang, W., Li, Y.: A Multiple Object Tracking Method Using Kalman
Filter. In: Proceedings of the 2010 IEEE International Conference on Information and
Automation, Harbin, China, June 20-23 (2010)
3. Miller, C., Allik, B., Ilg, M., Zurakowski, R.: Kalman Filter-based Tracking of Multiple
Similar Objects From a Moving Camera Platform. In: 51st IEEE Conference on Decision
and Control, Maui, Hawaii, USA, December 10-13 (2012)
4. Särkkä, S., Vehtari, A., Lampinen, J.: Rao-Blackwellized particle filter for multiple target
tracking. Information Fusion 8, 2–15 (2007)
5. Jaward, M., Mihaylova, L., Canagarajah, N., Bull, D.: Multiple Object Tracking Using
Particle Filters. In: Aerospace Conference. IEEE (2006)
6. Maskell, S., Gordon, N.: A Tutorial on Particle Filters for On-line Nonlinear/ Non-
Gaussian Bayesian Tracking. In: Target Tracking: Algorithms and Applications IEE,
Workshop (2001)
7. Comaniciu, D., Meer, P.: Mean Shift Analysis and Applications. In: IEEE Int. Conf.
Computer Vision, Kerkyra, Greece, pp. 1197–1203 (1999)
8. Comaniciu, D., Ramesh, V.: Mean shift and optimal prediction for efficient object
tracking. In: Proceedings of International Conference on Image Processing, vol. 3, pp. 70–
73 (2000)
9. Zhou, Q., Aggarwal, J.K.: Object tracking in an outdoor environment using fusion of
features and cameras. Image and Vision Computing 24, 1244–1255 (2006)
10. Foresti, G.L.: A real-time system for video surveillance of unattended outdoor
environments. IEEE Transactions on Circuits and System for Video Technology 8(6),
697–704 (1998)
11. Kang, J.-S.: A Modified Expansion-Contraction Method for Mobile Object Tracking
Approach in Video Surveillance: Indoor Environment (to appear in AISC)
Development of a Standing-Up Motion Guidance System
Using an Inertial Sensor

Chikamune Wada, Yijiang Tang, and Tadahiro Arima

Graduate School of Life Science and Systems Engineering, Kyushu Institute of Technology
Hibikino 2-4, Wakamatsu-ku, Kitakyushu, 808-0196, Japan
wada@life.kyutech.ac.jp

Abstract. The standing-up motion consists of (1) a flexion phase, in which the
center of gravity (COG) moves forward, and (2) an extension phase, in which
the COG rises upward. However, because it is difficult for elderly and disabled
people to combine both phases, they need to perform each phase individually.
Although most people who are unable to stand up are able to raise their COG
upward, they are unable to move it forward. Therefore, we proposed a system
and evaluated its efficacy in supporting forward COG movement.

Keywords: standing-up motion, center of gravity, inertial sensor.

1 Introduction

Standing up from a chair is a very important motion in daily life. The standing-up
motion is complicated because it involves a change in the center of gravity (COG)
from over the ischium to over the feet. If a person cannot transfer their COG onto
their feet, standing up from a chair becomes difficult because the muscles required for
that motion are not activated. The standing-up motion is considered to consist of (1) a
flexion phase, in which the COG moves forward as the trunk leans forward, and (2)
an extension phase, in which the COG rises upward as the trunk lifts. Generally, a
healthy person skillfully uses a combination of these two phases to stand up from a
chair. However, it is difficult for elderly and disabled people to combine these phases;
therefore, they need to perform each phase individually. In addition, most people who
cannot stand up from a chair are able to raise their COG upward, but are unable to
move their COG forward and complete the standing motion. These people need assis-
tance during the flexion phase rather than the extension phase. In medical institutions,
caregivers provide assistance by pushing the patient’s trunk forward until the person
can raise their COG upward. To provide similar assistance, we would like to develop
a system to support forward movement of the COG during the standing-up motion.
To realize such a system, both the standing-up motion and the COG position need
to be measured. This is easily carried out using a force plate and a three-dimensional
motion capture system. However, because the measurement equipment is large and
expensive, daily usage of such a system can be strenuous [1]. In order to obtain ap-
propriate measurements without using complicated equipment, we hypothesized that

Y.S. Kim et al. (eds.), Advanced Intelligent Systems, 179


Advances in Intelligent Systems and Computing 268,
DOI: 10.1007/978-3-319-05500-8_17, © Springer International Publishing Switzerland 2014
180 C. Wada, Y. Tang, and T. Arima

trunk movement needed to be measured in the flexion phase so that COG movements
could be estimated using only the trunk movement data. This hypothesis is based on
the fact that only the trunk moves in the flexion phase of standing up from a chair.
Therefore, we acquired the trunk angle using an inertial sensor installed on the trunk
and calculated the COG movement in real time during standing by applying the trunk
angle to a human body model. If a patient can sense when their COG has been trans-
ferred to their feet, standing up from a chair would become easier because after being
alerted of the COG transfer, the patient would only need to perform the upward mo-
tion. Fig. 1 shows an illustration of our system in which the inertial sensor data is
wirelessly transmitted to a computer that notifies the subject of the COG transfer us-
ing visual and auditory signals. This study describes the optimal position at which the
inertial sensor should be set on the human body model to allow estimation of the
COG position. Furthermore, we report the efficacy of this COG estimation method.

Fig. 1. Depiction of the COG estimation system

2 Optimal Position of the Inertial Sensor

2.1 Experimental Setup and Procedure

We used two different human body models in this study, the three-link model and the
four-link model, shown on the left side of Fig. 2. Both models define the trunk angle
differently: in the three-link model, the trunk angle was defined by the link that con-
nected the acromion and the trochanter major, whereas in the four-link model, it was
defined by the link that connected the acromion and the ilium. Furthermore, to deter-
mine the most appropriate model for estimating the horizontal COG position during
the flexion phase and the optimal position of the inertial sensor on a trunk, two iner-
tial sensors were attached to the trunk: one on the sternum and the other on the ilium.
In addition, we placed three infrared reflection markers on the acromion, ilium, and
trochanter major, (right panel of Fig. 2). Subsequently, we measured body movement
using the three-dimensional motion capture system (Detect Inc.) with four infrared
cameras sampling at 60 Hz. The measurement specifications of the inertial sensor
(Logical Product Inc.) were 300 °/s for the gyroscope and 5 g for the accelerometer.
Three healthy male subjects were asked to bend their trunks under two different con-
ditions: (1) straightening the back and (2) hunching the back, as shown in Fig. 3.
These measurements were repeated two times for each subject.
Development of a Standing-Up Motion Guidance System Using an Inertial Sensor 181

Fig. 2. Three-link and four-link models (left) and the experiment schematic view (right)

Fig. 3. Straightening the back (left) and hunching the back (right)

2.2 Experimental Results

Fig. 4 shows an example of the trunk angle results we obtained using the three-link
and four-link models, which were calculated from the motion capture data. From all
of the results, the root mean square (RMS) between the two models was 3.10°. This
value was considered to be small because measurement errors of a few degrees may
have occurred because of the infrared markers moving slightly on the skin [2]. On the
basis of these results, we decided to use the three-link model because it was easier to
operate. To determine the most suitable position for the inertial sensor between the
sternum and ilium, we calculated the RMS error (RMSE) values of the trunk angles.
The RMSE values quantify the differences between the trunk angle obtained from the
inertial sensor and those obtained from the motion capture system. The results are
shown in Table 1. We observed that when the inertial sensor was placed on the ilium,
the resulting trunk angle was significantly influenced by trunk motion (RMSE of 6.7
with a straight trunk increased to 13.67 with a hunched trunk). This was because the
ilium did not move enough while the back was hunched; therefore, the sensor located
on the ilium could not accurately measure the trunk angle. However, when the sensor
was placed on the sternum, the trunk angle did not change significantly with back
hunching. Therefore, we concluded that attaching the inertial sensor to the sternum
was optimal for this experimental setup.
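The RMSE criterion used to compare the sensor-derived and motion-capture trunk angles is the standard root-mean-square error; as a minimal sketch over two equal-length angle sequences:

```python
import math

def rmse(estimated, reference):
    """Root-mean-square error between two equal-length angle sequences,
    e.g. trunk angles from the inertial sensor vs. motion capture."""
    n = len(estimated)
    return math.sqrt(sum((e - r) ** 2 for e, r in zip(estimated, reference)) / n)
```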

Table 1. RMSE values of the trunk angle (in degrees) for each sensor position when two
different trunk motions were performed

            straightening the back    hunching the back
  sternum           5.87                    7.84
  ilium             6.7                    13.67

Fig. 4. Trunk angle measurements from the three-link and four-link models, which were calcu-
lated from the motion capture data

3 Estimation of Horizontal COG Position Using Trunk Movement Measurements

3.1 Estimation of the Horizontal COG Position

This section describes the method for determining the horizontal COG position from
the trunk angle data acquired from the inertial sensor. As shown in Fig. 5, we as-
sumed the human body to be a rigid body with three links. The loads on the foot (R1)
and the chair (R2) were estimated using an equation of motion, as shown in Fig. 5. In
addition, the horizontal COG position was estimated from the moment of force among
P1, P2, P3, R1 and R2. The rigid body model was newly made based on the results of
other studies [3, 4].
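Because the paper's segment parameters are not reproduced here, the idea of recovering the horizontal COG from the trunk angle can be illustrated with a static three-link sketch using invented mass fractions and segment lengths; only the structure (trunk angle mapped to a mass-weighted horizontal COG) follows the method described, and all numbers and names are assumptions.

```python
import math

# Hypothetical three-link parameters: (mass fraction, segment length in m).
# These are illustrative values, not those of the rigid body model in Fig. 5.
SEGMENTS = [
    (0.35, 0.45),  # lower body, assumed fixed while seated
    (0.50, 0.50),  # trunk, rotated by the measured trunk angle
    (0.15, 0.25),  # head/neck, assumed to follow the trunk
]

def horizontal_cog(trunk_angle_deg, hip_x=0.0):
    """Static estimate of the horizontal COG position (m, forward positive)
    from the trunk angle, treating each link's COG as its midpoint."""
    theta = math.radians(trunk_angle_deg)
    (m_low, _), (m_trunk, l_trunk), (m_head, l_head) = SEGMENTS
    x_low = hip_x                                       # seated lower body
    x_trunk = hip_x + 0.5 * l_trunk * math.sin(theta)   # trunk midpoint
    x_head = hip_x + (l_trunk + 0.5 * l_head) * math.sin(theta)
    total = m_low + m_trunk + m_head
    return (m_low * x_low + m_trunk * x_trunk + m_head * x_head) / total
```

Leaning the trunk forward (increasing the angle) moves the estimated COG forward, which is the quantity the guidance system monitors.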

Fig. 5. The human body model and the method used to calculate the horizontal COG position

3.2 Evaluation of the Estimation Method for Horizontal COG Position


We compared the horizontal COG position that we estimated from the inertial sensor
data with that from the three-dimensional motion capture data and force plate data.
First, the standing-up motion was measured by combining the three-dimensional mo-
tion capture system (Vicon-Peak Inc.) with eight infrared cameras and four force
plates (AMTI Inc.). Twelve infrared markers were placed on both sides of the acro-
mion, iliac crest, greater trochanter, knee, ankle, and metatarsal bone. The subjects
were five non-disabled males; they were asked to stand up from a chair at two differ-
ent speeds: (1) at their usual standing speed and (2) at a speed that was slower than
usual, resembling that used by elderly people. The height of the chair was 0.42 m, and
the subjects had to decide upon a suitable foot position upon standing. Measurements
were repeated five times for each subject and each condition. Data were recorded
using cameras sampling at 100 Hz.
Second, the motion capture system data were used to calculate the trunk angle and
the horizontal COG position was estimated using the trunk angle in our method. We
decided that the starting point of the horizontal COG would be the center of the foot
heel, and forward positions were given positive values.
Third, these estimated horizontal COG positions were compared with those ob-
tained after combining the motion capture system and force plate data.
Examples of results are shown in Fig. 6. The left graph of Fig. 6 shows how hori-
zontal COG position changed when the subject stood up with their usual speed, and
the right graph of Fig. 6 shows the changes in COG at a slower standing speed. The
vertical axis represents the horizontal COG position in meters and the horizontal axis
shows the time in seconds. The solid and dotted curves show the measured and esti-
mated data, respectively. The vertical dotted line shows the time at which the buttocks
left the chair and the horizontal dotted line represents the boundary at which the base
of support on foot was located. This boundary of support was defined to be −0.053 ±
0.0057 m from the heel [4].

Fig. 6. Standing up with a usual speed (left) and a slower-than-usual speed (right)

The RMSE values for estimated and calculated horizontal COG positions are
shown in Table 2. We found that the mean RMSE values associated with standing up

at a normal speed and standing up slowly were both approximately 0.02 m. As a re-
sult, we determined that our method estimates horizontal COG position with relative-
ly good accuracy because there were no major differences between both conditions.
We found that differences between the estimated and measured data became larger
once the vertical dotted line in Fig. 6 was crossed. In this experiment, the time when
the buttocks left the chair was defined as the time at which the value of force plate
(placed underneath the chair) became zero. However, our method was valid while the
buttocks remained in contact with the chair, but it resulted in larger differences after
the buttocks left chair.
In our system, it was important to estimate the horizontal COG position imme-
diately before it reached the base of support boundary (horizontal dotted line in Fig.
6) in order to judge whether the horizontal COG position was within the base of sup-
port. From the right graph of Fig. 6, we found that when the subject stood up slowly, the buttocks left
the chair after the horizontal COG position had completely entered the base of sup-
port. Therefore, we concluded that our system can estimate horizontal COG position
until the horizontal COG position entered the base of foot support because the but-
tocks did not leave when the subjects stood up slowly.

Table 2. RMSE values for the horizontal COG position in meters

4 Proposition of the Standing-Up Motion Guidance System and Evaluation of Its Efficacy

4.1 Proposing a Standing-Up Motion Guidance System


Considering the results of sections 2 and 3, we manufactured a trial version of the
standing-up motion guidance system. Our system comprised an inertial sensor and a
computer. The measurement specifications of the inertial sensor (Logical Product
Inc.) were 300°/s and 5 g. A software program, which was developed in Visual C#,
was used to estimate the horizontal COG position and judge whether the horizontal
COG position was inside the base of support. Then, the time elapsed as the trunk
was raised upward was monitored through an instruction display, shown in Fig. 7.

Fig. 7. Display screen for the trial version of the guidance system

After the inertial sensor was placed on the user’s chest, the number of bars in Fig. 7
increased as the user leaned his/her trunk forward. In this trial version, red bars and
the instruction “Lean the trunk” were displayed when the horizontal COG position
was between 0.06 m and 0.12 m from the boundary of the base of support. When the
COG position was between 0 and 0.06 m, the interface displayed yellow bars and the
instruction “Lean a little more.” Once the COG position entered within the base of
support, blue bars were displayed and the interface gave the instruction “Raise your
trunk” along with an auditory beep.
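The display logic described above can be summarized as a small state function; this is a sketch assuming the estimated COG-to-boundary distance is given in meters, decreases as the user leans forward, and is negative once the COG is inside the base of support (the function name is hypothetical).

```python
def guidance_state(distance_to_base):
    """Map the horizontal COG distance (m) from the base-of-support
    boundary to the bar color and instruction shown on the display."""
    if distance_to_base > 0.12:
        return None  # beyond the guidance range; no bars displayed
    if distance_to_base > 0.06:
        return ("red", "Lean the trunk")
    if distance_to_base > 0.0:
        return ("yellow", "Lean a little more")
    return ("blue", "Raise your trunk")  # accompanied by an auditory beep
```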

4.2 Evaluation of our Guidance System

We used data from an electromyogram (EMG) to evaluate the efficacy of our guid-
ance system. Four experimental patterns were prepared: (1) Subjects were asked to
stand up slowly when the system judged that their horizontal COG position had not
entered the base of support; (2) Subjects were asked to stand up slowly when the sys-
tem judged that their horizontal COG position was very close to the boundary of the
base of support; (3) Subjects were asked to stand up slowly when the system judged
that their horizontal COG position was inside the base of support; and (4) Subjects
were asked to stand up without using the guidance system. Two non-disabled males
participated in this experiment and repeated all conditions five times.
During the experiment, EMG data from muscles related to the standing-up motion
were obtained. Data from the tibialis anterior (TA), rectus femoris (RF), gluteus max-
imus (GMA), and erector spinae (ES) were measured using an electromyogram (Me-
dicament Inc.). However, only muscles on the right side were measured because the
standing-up motion is generally symmetrical about the median sagittal plane. Before
experimentation, the maximum voluntary contraction (MVC) EMG data for all mus-
cles were measured, and the EMG data during the standing-up motion were norma-
lized by the EMG of MVC. We defined this normalized data as %MVC.
Results are shown in Figs. 8 and 9. The vertical axes represent %MVC and hori-
zontal axes represent the experimental pattern. For pattern (1), the subjects did not
succeed in standing up in all trials. Generally, comparing patterns (2)-(4), the %MVC
values from pattern (2) were relatively larger than those from patterns

(3) and (4). Moreover, we did not find any large differences between patterns (3) and
(4). Therefore, we considered that our system was able to guide subjects to the optim-
al forward trunk position that would allow them to stand up easily.

Fig. 8. %MVC results for TA (left) and RF (right)

Fig. 9. %MVC results for GMA (left) and ES (right)

5 Conclusion

In this paper, we proposed and evaluated the efficiency of a standing-up motion guid-
ance system that informs the user of the optimal time at which their trunk should be
raised when standing up from a chair. However, there are many problems that remain
unresolved. In subsequent research, we plan to perform the following activities:

(1) Produce a method to estimate the three-dimensional COG position
(2) Improve the user interface
(3) Provide stronger evidence by recruiting more subjects and performing more trials
(4) Evaluate our system with patients who train the standing-up motion in clinical
settings

Acknowledgements. Part of this research was conducted by Mr. Tsukasa Fujimoto.
We sincerely appreciate his contribution.

References
1. Katsuhira, et al.: Analysis of joint moment in standing-up motion with a handrail.
JSPO 19(1), 45–51 (2007)
2. Fukuda, et al.: Estimation of kinematic parameters of human skeletal model based on mo-
tion capture data. ROBOMEC 2006 2A1-D06(1) (2006) (in Japanese)
3. Matsui, H.: Determination of Center of Gravity of Human Body in Various Postures: I. Cen-
ter of Gravity Calculated with Symplified Mass Values. Taiikugakukenkyu 2(2), 65–76
(1956) (in Japanese)
4. Anthropometrical Data by the National Institute of Advanced Industrial Science and Tech-
nology (1991-1992)
A Structure of Recognition for Natural and Artificial
Scenes: Effect of Horticultural Therapy Focusing
on Figure-Ground Organization

Guangyi Ai1, Kenta Shoji1, Hiroaki Wagatsuma1,2, and Midori Yasukawa3


1 Department of Brain Science and Engineering, Kyushu Institute of Technology,
2-4 Hibikino, Wakamatsu-Ku, Kitakyushu 808-0196, Japan
{ai-kouitsu,shoji-kenta}@edu.brain.kyutech.ac.jp
2 RIKEN Brain Science Institute, 2-1 Hirosawa, Wako-shi, Saitama, 351-0198, Japan
3 Department of Clinical Nursing, College of Med., Pharm. and Health Sciences,
Kanazawa University
waga@brain.kyutech.ac.jp, midori@mhs.mp.kanazawa-u.ac.jp

Abstract. In modern societies, the prevention of depression in the elderly has
become an inevitable social demand. As a solution, horticultural therapy has
attracted attention over the years. In this study, we focused on the importance
of the therapy in the perception-action cycle, to enhance motivation to work on
the therapy, especially when subjects interact with natural objects. As an initial
step, we investigated the visual perception process of the therapy by using
spontaneous eye movements as an index of subconscious curiosity and interest.
Our experimental results demonstrated a significant difference in eye
movements between natural and artificial object cases. In the natural cases, the
detailed analyses suggest a high motivation when interacting with complex
natural materials, and further analysis leads the way to investigating the
fundamental effect of the therapy.

Keywords: Horticultural Therapy, figure-ground separation, Gestalt


psychology, fractal structure, saccadic eye movement.

1 Introduction
Investigation of positive psychological effects or mental treatments to reduce mental
stress reactions based on subjective feeling and recognition in human activity
has been a problem of concern [1]. An important question to address is how to
bridge, link, and balance subjective values and objective quantitative analyses, for
understanding the mechanism of 'healing from mental stress disorder' based on
brain-body coordination. Various types of creative art therapies, such as
dance/movement, music, drama, poetry, and art (painting) therapies [2], have been
proposed and distributed widely. In this paper, we focus on the effect of horticultural
therapy (HT) to understand the mechanisms of brain-body coordination. The therapy
has multiple levels to mediate between the human and the environment via
recognition of beauty in nature, rearrangement of plants in a place, cooperation with
other coworkers in making things in a form and style, and connection with a society
of farmers, and clinical evidence reports a certain effect in treating depression in the

Y.S. Kim et al. (eds.), Advanced Intelligent Systems, 189


Advances in Intelligent Systems and Computing 268,
DOI: 10.1007/978-3-319-05500-8_18, © Springer International Publishing Switzerland 2014
190 G. Ai et al.

elderly [3], but it is still difficult to analyze 'a sense of beauty' quantitatively and to
determine how to reach subjective feeling and motivation for working. As a first step,
we explore a method to evaluate participants' motivation and values during the
treatment by using eye movements, or visual attention, as an index of subconscious
curiosity and interest.

2 An Analytical Point of View in Gestalt Perception


2.1 Gestalt Perception and Horticultural Therapy
Horticultural therapy has a historical context in the 1940s and 1950s: it was initiated
as rehabilitative care of hospitalized war veterans and significantly expanded the
acceptance of treatments for mental illness [3]. The therapeutic benefits of garden
environments have been demonstrated, and HT practice then gained in credibility and
was embraced for a wider range of diagnoses and therapeutic options. Recently, HT
has been considered a beneficial and effective therapeutic modality in a broad range
of rehabilitative, vocational, and community settings. Therefore, this method is
expected to help improve memory, cognitive abilities, task initiation, language skills,
and socialization [4]. In physical rehabilitation, HT can help strengthen muscles and
improve coordination, balance, and endurance. In vocational HT settings, people
learn to work independently, solve problems, and follow directions. In the present
study, we focus on the cognitive aspect accompanied by internal motivation and
interest.
Gestalt psychology [1] is a key to studying how humans arrange discrete stimuli
into holistic perceptions. This concept presents the idea that stimuli are perceived as
an organized whole, not as unrelated or isolated pieces: "the whole is greater than the
sum of the parts." The Gestalt principles, or laws of perceptual organization, imply
that the cognitive process in the brain naturally enhances the categorization and
organization of stimuli as sensory inputs into meaningful units, to make sense of the
stimuli as 'information.' This principle deals with how individual stimuli are
perceived as a whole to create a consistent perception and sensation. In Gestalt law,
the important properties are 1) grouping principles, the mechanism by which
individual stimuli are organized into a group of information, and 2) contextual
principles, the mechanism by which the surrounding environment or context helps a
person determine their perception of stimuli in the environment.

Fig. 1. A possible relation between holistic perceptions and stages of HT. The 'point of view'
of the subject changes depending on circumstances and internal feelings. Thus, HT offers
fields, opportunities, and tools for shifting the viewpoint and raising awareness of what one
is doing.

In considering stages of progressive treatment in HT, we hypothesized that the
level of holistic perceptions corresponds to stages of motivation, such as a specific
A Structure of Recognition for Natural and Artificial Scenes 191

item in view, a whole picture, item arrangements with a therapist, cooperative work
with others, and so on (Fig. 1). As discussed in Gestalt therapy [5], understanding the
internal process helps to find ways to enhance the effect of the treatment depending
on the individual stages and preferences of target subjects.

2.2 Eye Tracking System


Various experimental instruments such as cameras, microphones, accelerometers, and
physiological sensors are used to detect the user's response, including the kinematics
of eye motions and biological signals such as the electroencephalogram (EEG). First,
we investigated the human visual recognition process by focusing on subconscious
attention and used the eye tracking system Binocular ViewPoint PC-60 Scene
Camera [4], which consists of two eye cameras and illuminator systems mounted on
the EyeFrame hardware with a dual-input PCI digitizer card for real-time data
analysis. In this specification, the visual range covers ±44° of visual arc horizontally
and ±20° vertically. The scanning rate ranges from 30 to 60 Hz depending on the
setting. The system mainly provides the gaze position (x, y) and the pupil size by
height and width.

3 Task Design to Investigate Visual Attention


In considering the medical effect of horticultural therapy, the experimental design
should be addressed comprehensively; this paper tackles the problem preliminarily
by focusing on a specific aspect. It is known that visual attention represents a degree
of motivation with respect to subjects' interests at the subconscious level. Kaspar &
König [5] reported dynamical changes of visual attention in multiple tendencies by
using various types of pictures, such as images of nature, urban scenes, computer-
generated fractals, and pink noise. They hypothesized that fixation duration increases
while saccade frequency and distance decrease when human subjects gaze at the
same image repeatedly, and they then demonstrated the existence of such a tendency
except for the pink-noise image.
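The indices discussed above (fixation duration, saccade frequency, and saccade distance) can be computed from a gaze trace with a simple velocity-threshold classifier. This is an illustrative sketch, not Kaspar & König's actual algorithm; the threshold, units, and function name are assumptions.

```python
import math

def saccade_metrics(gaze, dt=1 / 60, velocity_thresh=2.0):
    """Classify each inter-sample step of a gaze trace (list of (x, y)
    screen positions sampled every `dt` seconds) as saccade or fixation by
    a velocity threshold, and return (saccade_count, mean_saccade_distance,
    total_fixation_time)."""
    saccade_dists = []
    fixation_time = 0.0
    for (x0, y0), (x1, y1) in zip(gaze, gaze[1:]):
        dist = math.hypot(x1 - x0, y1 - y0)
        if dist / dt > velocity_thresh:
            saccade_dists.append(dist)   # fast step: saccade
        else:
            fixation_time += dt          # slow step: fixation
    mean_dist = sum(saccade_dists) / len(saccade_dists) if saccade_dists else 0.0
    return len(saccade_dists), mean_dist, fixation_time
```

Comparing these metrics across repeated presentations of the same image would reveal the hypothesized trends.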

Fig. 2. The experimental setup. We used a screen with a 52-inch TV monitor. The distance
between the screen and the eyes and the height of the screen center are 120 cm and 140 cm,
respectively, adjusted to the standing posture (left). The combination of a stimulus image S*
and the blank image is given randomly (right). A blank image is interposed between stimulus
images.

According to their interpretation, supported by subjects' interviews after the
experiment, subjects catch a global view at first and then observe the details.
However, their experimental data exhibit that the image of nature tends to make a
resistance to the decrease in saccade distance, suggesting a large enhancement of
actions to catch the whole structure, in a sense of seeing the wholeness of nature.
We simply hypothesized that eye gaze movements show a similar tendency when
one watches a simple artificial drawing and a single entity or object in a natural view,
while the whole natural view generates excitement for watching the collective
relationship of the composing elements and their details; we expect that such a
tendency can be observed by using the frequency and range of eye movements as an
index. Fig. 2 illustrates the experimental setup. Visual stimuli are presented on a
large screen with respect to the eyesight, with the instruction of free viewing inside
the screen. The presentation order is given randomly by choosing one image from a
set of prepared images, and the same random order is used for all subjects in this
experiment.

4 Experiment
We designed a spontaneous observation task with simple diagrams as artificial pictures and natural scenes with salient flowers located in the center. The surrounding area of the screen was completely covered by white paper to prevent disruption of concentration on the stimuli. For comparison between effects of artificial and natural images, we designed two types consisting of four tasks: presentations of either a triangle or a square drawn with a thin line (1: ADN) or a thick line (2: ADK) as shown in Fig. 3, and presentations of either salient flowers without any background (single condition, denoted 'S'), salient flowers with selected surrounding flowers (multiple condition, denoted 'M') or the original natural scene accompanied by the background (larger one, denoted 'L'), using a white flower (3: NFW) and a yellow flower (4: NFY) as shown in Fig. 4. Experiments of ADN, ADK, NFW and NFY were done according to the stimulus presentation schedule shown in Fig. 2 (bottom right). Individual images were randomly chosen ten times, so the total time was about 5 min (2 types × 10 times × 10 s + 19 times × 5 s = 295 s) in ADN and ADK and about 7.5 min (3 types × 10 times × 10 s + 29 times × 5 s = 445 s) in NFW and NFY. The instruction simply gives information on how many presentations will be done and requests free viewing, except for looking outside the screen frame.
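The schedule arithmetic above can be sketched as follows; `build_schedule` and its parameter names are illustrative, not from the paper.

```python
import random

def build_schedule(stimulus_types, repeats=10, stim_s=10, blank_s=5, seed=0):
    """Build a presentation schedule: each stimulus type appears `repeats`
    times in random order, with a 5 s blank interposed between stimuli."""
    rng = random.Random(seed)
    stims = [s for s in stimulus_types for _ in range(repeats)]
    rng.shuffle(stims)
    schedule = []
    for i, s in enumerate(stims):
        schedule.append((s, stim_s))
        if i < len(stims) - 1:          # blanks go only between stimuli
            schedule.append(("blank", blank_s))
    return schedule

# ADN/ADK: 2 types x 10 presentations -> 19 interposed blanks
adn = build_schedule(["triangle", "square"])
total = sum(d for _, d in adn)
print(total)  # 2*10*10 + 19*5 = 295
```

The same function reproduces the NFW/NFY timing: `build_schedule(["S", "M", "L"])` gives 3 × 10 × 10 s + 29 × 5 s = 445 s.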

(ADN-Tr) (ADN-Sq) (ADK-Tr) (ADK-Sq)


Fig. 3. Images for the spontaneous observation task of artificial pictures. Simple diagrams with
thin line (ADN) and with thick line (ADK) are used.

We used simple diagrams of a triangle (Tr) and a square (Sq) as artificial pictures, which are expected to allow evaluation of eye gaze movements. In consideration of legibility, thin (8 px) and thick (40 px) lines were used with respect to the full screen size of 1920 × 1200 (115.5 × 65 cm). In the process of preparing pictures as natural scenes, we used a natural scene with salient flowers located in the center, and graded three types, S, M and L, as
A Structure of Recognition for Natural and Artificial Scenes 193

explained above. As a normalization of natural images for consistent evaluation, two mirror-reflected images were concatenated at the center (Fig. 4).

4.1 Experimental Results


We examined the four tasks with five subjects. In the artificial conditions ADN and ADK, two typical gaze patterns were observed: one is a tendency of gazing at the corners of the presented diagram and the other is a simple fixation at the screen center (Fig. 5). In the natural conditions NFW and NFY, gaze positions tend to be distributed over the presentation range of the image, depending on the sizes S, M and L (Fig. 6).

(NFW-S) (NFW-M) (NFW-L)

(NFY-S) (NFY-M) (NFY-L)

Fig. 4. Images for the spontaneous observation task of natural pictures. We selected two images of natural scenes, NFW-L and NFY-L, as originals, and removed the background except for two types of flowers, the central flowers and the surrounding flowers, to obtain image M. As a single-flower image, only the salient flowers were extracted to make image S.

Fig. 5. Examples of all eye movement positions of specific subjects in the ADK task, obtained by the superimposed plot depending on the stimulus conditions of triangle (left), square (middle) and blank image (right). The top three panels are from subject K34 and the bottom panels from K51 (see Fig. 9).

In profiles obtained by the y-axis summation of gaze positions, multiple peaks appear at the positions of the edges of the triangle and square shapes (Fig. 7; left and middle). In contrast, the blank image provides a single peak in the center (Fig. 7; right). In the analysis of results from natural scenes in the NFW and NFY tasks, profiles of the single flower (S) condition are similar to the results of ADN and ADK, while the original image (L) has a profile similar to the blank image, with a single peak in the center (Fig. 8). Interestingly, the multiple flower (M) condition provided a unique profile.

In the analysis of time profiles of eye movements over the ten sessions of the square condition in the ADK task, the spatial area of eye movements in each session tends to decrease monotonically, yet this tendency is not simply obtained from the profiles of individual subjects. A common property is a decrease of the eye movement size from session 1 to session 2, which is consistent with the result of Kaspar & König [6]. Since a value of 0.3 represents a movement along the square shape (Fig. 9c; session 2), a value close to zero means focusing at a corner (Fig. 9c; session 4).
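A session-wise size index of this kind can be sketched as below; the paper does not specify its exact index, so the RMS distance of gaze samples from the session centroid is an assumed, plausible choice (near 0 for a corner fixation, around 0.28 for travel along a 0.4-wide square outline).

```python
import numpy as np

def movement_size(gaze_xy):
    """Size index for one session's eye movements: RMS distance of the
    gaze samples from the session centroid, in screen-normalized units
    ([0, 1] per axis). An assumed index, not the paper's exact one."""
    g = np.asarray(gaze_xy, dtype=float)
    centroid = g.mean(axis=0)
    return float(np.sqrt(((g - centroid) ** 2).sum(axis=1).mean()))

rng = np.random.default_rng(1)
fixation = np.full((100, 2), 0.25) + rng.normal(0, 0.01, (100, 2))  # one corner
square = [(x, y) for x in (0.3, 0.7) for y in (0.3, 0.7)] * 25      # four corners
print(movement_size(fixation) < movement_size(square))  # True
```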

Fig. 6. All eye movement positions of subject K34 in the NFW and NFY tasks depending on stimulus conditions: single flower (S), multiple (M), original (L) and blank image (rightmost)

Fig. 7. Profiles of eye movements depending on the stimulus conditions of triangle (left), square (middle) and blank image (right), obtained by the summation of gaze positions along the y-axis of Fig. 6. The y-axis indicates the y coordinate of the gaze position, consistent with Fig. 6, and the x-axis indicates the normalized histogram that counts gaze positions in bins of 0.02 over [0, 1] of the y-axis of Fig. 6, divided by the total count of the individual subject. The thick line represents the average of the five subjects and the vertical thin lines represent the standard deviations (SD) across subjects.
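The normalized profile described in the caption can be sketched as follows; the function name and the synthetic gaze data are illustrative.

```python
import numpy as np

def gaze_profile(y_positions, bin_width=0.02):
    """Normalized histogram of gaze y-coordinates in [0, 1]: counts per
    0.02-wide bin divided by the total sample count, as in Fig. 7."""
    y = np.asarray(y_positions, dtype=float)
    n_bins = int(round(1.0 / bin_width))
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    counts, _ = np.histogram(y, bins=edges)
    return counts / len(y)

# Gaze concentrated at two horizontal edges yields two peaks,
# and the profile sums to 1 by construction.
y = np.concatenate([np.random.default_rng(0).normal(0.3, 0.01, 500),
                    np.random.default_rng(1).normal(0.7, 0.01, 500)])
profile = gaze_profile(np.clip(y, 0, 1))
print(round(float(profile.sum()), 6))  # 1.0
```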

Fig. 9b shows that three subjects keep watching multiple corners during a single 10 s session (K1, K43 and K22), one subject fluctuates between two modes, watching multiple corners or a single corner (K34), and one subject keeps its fixation point at the center throughout the task (K51). In the natural cases of the NFW and NFY tasks, the average plot in the S condition has a decreasing tendency, which is consistent with the artificial cases of the ADK-Sq and ADN-Tr profiles. On the other hand, average plots in the M condition have an increasing tendency (data not shown). In addition, average plots in the L condition tend to be flat, similar to the blank case.

Fig. 8. Profiles of eye movements depending on stimulus conditions: single flower (S), multiple (M), original (L) and blank image (rightmost), obtained by the summation of gaze positions along the y-axis. The top four panels are from the NFW task and the bottom panels from the NFY task.

(a) (b)

(c)
Fig. 9. Time profiles of eye movements over the ten sessions of the square condition in the ADK task, obtained from the size of eye movements in each session. The average and SD (a) and individual profiles with respect to subject IDs (b). Actual eye movements in sessions 1, 2, 3 and 4 of subject K34 (c).

5 Conclusion

In this paper, we investigated different tendencies of visual attention between natural scenes and artificial pictures, and analyzed dynamical changes of the gaze movements representing spontaneous searching actions traveling over salient features in space, which are necessary to recognize the whole structure of the target image and involve the subject's subconscious interests.
We initially hypothesized that the original natural scene is more attractive than pictures reconstructed from parts of its features. However, the original scene shows a tendency similar to the blank image, while salient flowers with surrounding flowers (M condition) seem to be attractive for many subjects, as exhibited by frequent movements of the gaze position distributed equally over the image area. This may indicate the existence of an appropriate point between the 'simple' and the 'too complex' to see [7], reflecting a sense of beauty in the artificial arrangement of natural plants as a representation of compact nature, and suggesting a mental effect of HT. In future work, we will need to conduct an integrative experiment accompanied by active motions to make an arrangement of flowers, which means an organization of the whole view through the action-perception cycle, modifying the structure according to interests, preferences and intentions. In this case, the observation digs deeper into the effect of HT. Communication with HT therapists is known to be important to intervene in the action-perception cycle of patients and to enhance further intentions to interact with the external environment.

Acknowledgements. The authors would like to thank Dr. Naoyuki Sato who kindly
offered us his expertise and necessary devices for this experimental measurement.
This research has been supported by DBSE Brain-IS Research Project in Kyushu
Institute of Technology and partly supported by JSPS 24650107.

References
1. Hothersall, D.: History of Psychology, 4th edn. McGraw-Hill, New York (2004)
2. The National Coalition of Creative Arts Therapies Associations,
http://www.nccata.org/
3. The American Horticultural Therapy Association, http://ahta.org/
4. Page, M.: Gardening as a therapeutic intervention in mental health. Nursing Times 104(45),
28–30 (2008)
5. Nevis, E.: Introduction. In: Nevis, E. (ed.) Gestalt Therapy: Perspectives and Applications,
p. 3. Gestalt Press, Cambridge (2000)
6. Kaspar, K., König, P.: Overt Attention and Context Factors: The Impact of Repeated
Presentations, Image Type, and Individual Motivation. PLoS ONE 6(7), e21719 (2011)
7. Taylor, R.P., Spehar, B., Van Donkelaar, P., Hagerhall, C.M.: Perceptual and Physiological
Responses to Jackson Pollock’s Fractals. Front. Hum. Neurosci. 5, 60 (2011)
A Study on Fashion Coordinates Based on Clothes
Impressions

Moe Yamamoto and Takehisa Onisawa

Graduate School of Systems and Information Engineering, University of Tsukuba


Tsukuba, Ibaraki, Japan
myamamoto@fhuman.esys.tsukuba.ac.jp,
onisawa@iit.esys.tsukuba.ac.jp

Abstract. This paper proposes a fashion coordinates generation system reflecting impressions expressed by an image word. For the construction of the coordinates system, three items are discussed. The first is the analysis of impressions of clothes in order to obtain knowledge of fashion coordinates. Through pre-experiments, two impression factors are extracted and an impressions space consisting of the two factor axes is constructed. Evaluation experiments are performed for the evaluation of clothes samples selected based on the impressions space. The second is the analysis of impressions of combinations of outerwear and a shirt. In order to obtain knowledge on the relation between combinations and impressions, four types of combinations are considered based on the impressions space, and evaluation experiments are performed for the evaluation of the combinations. The last is to propose a generation method of initial coordinates candidates. Evaluation experiments are performed for the evaluation of the method, and the results show that the coordinates generation method is applicable.

Keywords: Fashion coordinate, Impressions space.

1 Introduction

It is said that clothes have several roles: protection of the human body; discrimination of an organization, as with a company uniform or a school uniform; expression of sociality and/or age, i.e., expression of trend; self-expression; formal dress as a matter of courtesy; etc. Although we usually make fashion coordinates by combining chosen clothes considering the weather of the day, the visiting place and our frame of mind that day, we often worry about whether the chosen clothes are appropriate for the visiting place or whether the fashion coordinates fit our own impressions. Therefore, we are apt to make our usual fashion coordinates rather than trying to make new fashion coordinates by choosing various clothes, and consequently we make similar fashion coordinates every day [1].
Recently, many coordinates sites have appeared, which make virtual coordinates by the combination of various clothes [2-3]. Although fashion coordinates using these tools are pleasant, it is difficult to make fashion coordinates reflecting impressions of
Y.S. Kim et al. (eds.), Advanced Intelligent Systems, 197


Advances in Intelligent Systems and Computing 268,
DOI: 10.1007/978-3-319-05500-8_19, © Springer International Publishing Switzerland 2014
198 M. Yamamoto and T. Onisawa

clothes using these tools because these tools do not consider impressions of clothes. It is also difficult to use these tools without knowledge of the combination of clothes. There are studies on selection support systems for clothes, such as color schemes of clothes [4] and automatic fashion coordination using a time-series model [5]. However, even these systems have problems: impressions of clothes cannot be dealt with, or shapes and colors of clothes are chosen from clothes samples prepared beforehand. With these problems, fashion coordinates with any desired impression are impossible, and the system cannot offer such fashion coordinates.
Evolutionary design systems, such as clothes design or color scheme design, have been proposed [6-7], applying evolutionary computation. Users can obtain creative designs by evaluating presented design candidates in these evolutionary design systems. However, the evaluation is often performed using a numerical evaluation value based on the user's preference. Thus, impressions of design candidates cannot be dealt with directly by this approach.
This paper considers a fashion coordinates system based on impressions of clothes expressed by various adjectives, which designs various clothes and/or coordinates various fashions. In this paper, for the construction of the system, impressions of fashion coordinates, i.e., impressions of the combination of outerwear and a shirt, are analyzed. Furthermore, a part of the fashion coordinates system that generates combinations of outerwear and a shirt as initial fashion coordinates candidates is constructed based on knowledge obtained by the analysis, and the evaluation of initial fashion coordinates candidates is performed. Although the proposed system has a modification part, in which presented fashion coordinates candidates are evaluated and modified according to the user's evaluation, with this modification procedure repeated until the user is satisfied with the candidates, the discussion of the modification part is omitted in this paper.
The aim of this paper is to analyze impressions of fashion coordinates, i.e., impressions of the combination of outerwear and a shirt for an inputted adjective, and to obtain knowledge on the generation of initial fashion coordinates candidates before the modification.
The organization of the paper is as follows. Section 2 describes the outline of the system and the clothes data used in this paper. Section 3 describes the construction of the coordinates generation part in the fashion coordinates system and the construction of the impressions space of clothes. Section 4 describes the experiments on the evaluation of coordinates impressions. Section 5 describes the design method of initial coordinates candidates and the evaluation experiments on their impressions. The final section gives the conclusions of this paper.

2 Fashion Coordinates Generation System

Fig. 1 shows the outline of the fashion coordinates generation system. The system consists of the impressions estimation part, the sample selection part, the fashion coordinates generation part, and the modification part. Furthermore, the system has a clothes samples DB. A user inputs an adjective expressing impressions of the desired fashion coordinates, which is called an image word. The system estimates the impressions value for the inputted image word at the impressions estimation part. The system chooses some pieces of outerwear and a shirt according to the estimated impressions value at the sample selection part. The system makes fashion coordinates using the chosen outerwear and shirt, and presents 10 fashion coordinates candidates to the user at the fashion coordinates generation part. The user evaluates these 10 presented candidates and the system modifies them according to the user's evaluation. These procedures of evaluation, modification and coordinates generation are repeated until the user is satisfied with the presented candidates. However, this paper discusses only the part of the system shown by red lines in Fig. 1, called the initial coordinates generation part, and omits the modification part.

Fig. 1. Outline of coordinates generation system

2.1 Clothes

Table 1 shows the 12 types of clothes, i.e., outerwear and shirts, used in this paper, which are for men in their twenties. The combination of a piece of outerwear and a shirt is called fashion coordinates in this paper.

Table 1. Types of clothes

SHIRT: 1 T-shirt, turtleneck shirt; 2 sleeveless shirt; 3 polo shirt; 4 shirt, colored shirt
OUTERWEAR: 5 knit sweater; 6 knit cardigan; 7 knit vest; 8 parka; 9 jacket; 10 vest; 11 nylon jacket; 12 down jacket

2.2 Clothes Parameters

Clothes, i.e., outerwear and shirts, are composed of six parts: a body part, a hem part, a sleeve part, a cuff part, a collar part, and a button and pocket part, as shown in Fig. 2. The parts excluding the button and pocket part have the parameters shown in Table 2, which are chosen referring to [8]. Various clothes, i.e., various shapes, colors and patterns, are designed by changing these parameter values, where the values are controlled so that unlikely clothes are not designed.

Fig. 2. Clothes parts

Table 2. Clothes parameters

1 Cloth Type | 16 Hem length | 28 Cuff length
2 Open / Close | 17 color (hue) | 29 color (hue)
3 Color Type | 18 color (saturation) | 30 color (saturation)
4 Body depth of neckline | 19 color (value) | 31 color (value)
5 width of neckline | 20 pattern | 32 pattern
6 shoulder width | 21 Sleeve length | 33 Collar type
7 chest width | 22 width | 34 collar width
8 chest height | 23 direction of curve | 35 color (hue)
9 girth of bottom | 24 color (hue) | 36 color (saturation)
10 length | 25 color (saturation) | 37 color (value)
11 direction of curve | 26 color (value) | 38 pattern
12 color (hue) | 27 pattern | 39 Pocket, Button
13 color (saturation) | |
14 color (value) | |
15 pattern | |

3 Construction of Initial Coordinates Generation Part

3.1 Construction of Impressions Space

In order to construct an impressions space, the following pre-experiments are performed. Clothes samples are generated by setting the parameter values of clothes at random; the number of clothes samples is 330. Ten subjects, graduate or undergraduate students, evaluate the impressions of 66 presented clothes samples using the 30 pairs of adjectives shown in Table 3, which are chosen referring to [9-10]. The pairs of adjectives are evaluated by the Semantic Differential method (SD method) with a 5-point scale as shown in Fig. 3.

Data obtained in the pre-experiments are analyzed by factor analysis. Table 4 shows the two impression factors obtained. Considering the meaning of the adjectives included in each factor, one is called the active factor and the other the cleanliness factor. Each factor is expressed by an axis with scale [-1.0, +1.0] so that adjectives expressing positive impressions are on the positive side and those expressing negative impressions are on the negative side. Then, the impressions space with these axes is constructed.
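A minimal sketch of deriving two axes from SD-rating data is shown below. The paper used factor analysis; plain PCA via NumPy is used here as a simple stand-in, so the resulting loadings will differ from Table 4, and the demo data are synthetic.

```python
import numpy as np

def two_axes(ratings):
    """Extract two orthogonal axes from SD ratings (samples x adjective
    pairs, values in [-2, +2]) and place each sample in a [-1, +1]^2
    impressions space. PCA stand-in for the paper's factor analysis."""
    X = np.asarray(ratings, dtype=float)
    X = X - X.mean(axis=0)                # center each adjective pair
    cov = np.cov(X, rowvar=False)
    _, vecs = np.linalg.eigh(cov)         # eigh returns ascending eigenvalues
    axes = vecs[:, ::-1][:, :2]           # the two leading components
    scores = X @ axes
    scores /= np.abs(scores).max(axis=0)  # rescale each axis into [-1, +1]
    return scores

rng = np.random.default_rng(0)
demo = rng.integers(-2, 3, size=(66, 30))  # 66 samples, 30 adjective pairs
coords = two_axes(demo)
print(coords.shape)  # (66, 2)
```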

Table 3. Pairs of adjectives

1 cheerful - depressed | 11 warm - cool | 21 elegant - indecent
2 vigorous - vigorless | 12 stylish - frumpy | 22 gorgeous - simple
3 aged - young | 13 diligent - idle | 23 easy to move - hard to move
4 balmy - gloomy | 14 masculine - androgynous | 24 amiable - uncompanionable
5 formal - casual | 15 old - new | 25 heavy - light
6 flashy - conservative | 16 spruce - disheveled | 26 vivid - dull
7 fancy - cheap | 17 clean - dirty | 27 intelligent - not intelligent
8 passive - active | 18 introvert - extrovert | 28 individual - innocuous
9 pesky - plain | 19 straitlaced - cheesy | 29 carnivore - herbivore
10 dowdy - smart | 20 infantile - adult-like | 30 dashing - cold

Cheerful (-2), A little cheerful (-1), Neutral (0), A little depressed (+1), Depressed (+2)

Fig. 3. Five-points scale of cheerful - depressed



Table 4. Impressions factors of clothes

Impression Factor | Pairs of Adjectives | First Factor Loading | Second Factor Loading
First Factor (Active factor) | passive - active | 0.888 | -0.112
 | conservative - flashy | 0.883 | -0.024
 | vigorless - vigorous | 0.835 | -0.232
 | introvert - extrovert | 0.815 | -0.183
 | straitlaced - cheesy | 0.813 | 0.232
 | innocuous - individual | 0.813 | 0.128
 | herbivore - carnivore | 0.739 | -0.082
Second Factor (Cleanliness factor) | dirty - clean | -0.073 | 0.767
 | disheveled - spruce | 0.199 | 0.74
 | dowdy - smart | -0.176 | 0.725

3.2 Selection of Clothes Samples by Image Word

Outline of Selection of Clothes Samples


Fig. 4 shows the outline of the selection of clothes samples by an image word. At the impressions estimation part, the impressions of an inputted image word are estimated. At the clothes samples selection part, clothes samples whose impressions are similar to the impressions of the image word are chosen as clothes that reflect well the impressions expressed by the inputted image word.

Fig. 4. Outline of selection of clothes samples by image word

Impression Estimation of Image Word


The image word impressions value on the active factor axis and that on the cleanliness factor axis are estimated using the concept of co-occurrence [11-12]. The following explanation takes the active factor as an example.

Let an image word be w, and let the i-th adjective pair included in the active factor be a_i^+ and a_i^- (i = 1, 2, ..., 7). Two co-occurrences C_i^+ and C_i^- (i = 1, 2, ..., 7) are defined as follows:

C_i^+ : "w and a_i^+" OR "a_i^+ and w",
C_i^- : "w and a_i^-" OR "a_i^- and w".

Co-occurrence of adjectives is searched online and the number of web pages having the co-occurrence of adjectives is counted. Let the numbers of web pages having co-occurrences C_i^+ and C_i^- be N_i^+ and N_i^- (i = 1, 2, ..., 7), respectively. The degrees of the impressions similarity between w and a_i^+, and between w and a_i^-, are defined as follows, respectively.

S_i^+ = (N_i^+)^1.6 / ((N_i^+)^1.6 + (N_i^-)^1.6) ,   (1)

S_i^- = (N_i^-)^1.6 / ((N_i^+)^1.6 + (N_i^-)^1.6) .   (2)

The impression values v_i^+ and v_i^- of image word w for the adjective pair a_i^+ and a_i^- are calculated by the following expression:

v_i^± = 2.0 · S_i^± - 1.0 .   (3)

The image word impressions value F_x for the active factor is obtained by

F_x = (v_{j1}^+ + v_{j2}^+) / 2 ,   (4)

where j1 is the index of the maximum value of N_i = N_i^+ + N_i^- (i = 1, 2, ..., 7) and j2 is the index of the second maximum value. This means that the adjectives with the largest and the second largest numbers of hits by web search are chosen for the image word impressions estimation. The image word impressions value for the cleanliness factor is evaluated in the same way. The image word impressions values, F_x for the active factor and F_y for the cleanliness factor, are expressed as the coordinate values (F_x, F_y) in the impressions space.
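This web-search estimation can be sketched as follows. The similarity ratio, the mapping onto [-1, +1], and the averaging over the two pairs with the most hits follow the textual description, but the exact functional forms and constants are simplified assumptions (the 1.6 constant mentioned with Eqs. (1)-(2) is not reproduced), and live web queries are replaced by a given hit-count table.

```python
def estimate_factor_value(hits):
    """Estimate an image word's value on one factor axis from web
    co-occurrence hit counts.

    `hits` maps each adjective pair of the factor to (n_plus, n_minus):
    page counts for co-occurrence of the image word with the positive
    and the negative adjective of the pair."""
    values, totals = {}, {}
    for pair, (n_plus, n_minus) in hits.items():
        total = n_plus + n_minus
        if total == 0:
            continue                       # no evidence for this pair
        s_plus = n_plus / total            # similarity toward the positive pole
        values[pair] = 2.0 * s_plus - 1.0  # map [0, 1] onto the [-1, +1] axis
        totals[pair] = total
    # keep only the two pairs with the largest hit counts, as in the text
    top2 = sorted(totals, key=totals.get, reverse=True)[:2]
    return sum(values[p] for p in top2) / len(top2)

hits = {"passive-active": (900, 100),
        "conservative-flashy": (700, 300),
        "vigorless-vigorous": (50, 50)}
print(round(estimate_factor_value(hits), 3))  # 0.6
```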

Clothes Samples Database


In the clothes samples database, the 330 clothes samples used in the pre-experiments in Section 3.1 are stored with a sample number index and the sample impressions value, defined as coordinate values in the impressions space. The i-th clothes sample (i = 1, 2, ..., 330) has coordinate values (x_i, y_i) in the impressions space defined by the following expressions:

x_i = (1/7) Σ_{j=1}^{7} (m_{ij} / 2.0) ,   (5)

y_i = (1/3) Σ_{j=1}^{3} (m'_{ij} / 2.0) ,   (6)

where m_{ij} (j = 1, 2, ..., 7) and m'_{ij} (j = 1, 2, 3) are the mean values among the subjects of the 5-point scale evaluation values of the j-th pair of adjectives belonging to the active factor and the cleanliness factor, respectively, in the pre-experiments.

Selection from Clothes Sample Database


The difference of impressions between an inputted image word and the i-th clothes sample is defined by expression (7):

D_i = sqrt((F_x - x_i)^2 + (F_y - y_i)^2) .   (7)

A small difference means a high similarity. Clothes samples with a high similarity degree, i.e., a small difference, are therefore chosen and presented to the user.
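Assuming the difference in expression (7) is the Euclidean distance in the impressions space, sample selection can be sketched as below; the function name, sample table and k parameter are illustrative.

```python
import math

def select_samples(word_xy, samples, k=5):
    """Choose the k clothes samples whose impression coordinates are
    closest to the image word's in the impressions space (small
    difference = high similarity). `samples` maps id -> (x, y)."""
    def dist(item):
        sid, (x, y) = item
        return math.hypot(word_xy[0] - x, word_xy[1] - y)
    return [sid for sid, _ in sorted(samples.items(), key=dist)[:k]]

samples = {1: (0.8, 0.1), 2: (-0.9, 0.2), 3: (0.5, 0.4), 4: (0.1, -0.7)}
print(select_samples((0.6, 0.3), samples, k=2))  # [3, 1]
```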

3.3 Evaluation Experiments for Clothes Impressions


The experiments are performed in order to confirm whether presented clothes samples reflect the impressions expressed by image words. Eight image words, light, flashy, simple, solemn, depressed, frumpy, pesky, and embarrassed, are used in the experiments, and Fig. 5 shows their coordinate values in the impressions space estimated by the concept of the co-occurrence of adjectives. Five clothes samples for each image word are evaluated by 14 subjects, graduate or undergraduate students, with a 7-point scale as shown in Fig. 6. This scale expresses whether or not the subjects feel that the clothes samples reflect the impressions expressed by the image word.

Fig. 5. Position of 8 image words in impressions space



Very unfit (-3), Unfit (-2), A little unfit (-1), Neutral (0), A little fit (+1), Fit (+2), Very fit (+3)

Fig. 6. Seven-points scale

3.4 Experimental Results

Fig. 7 shows the averages of the subjects' evaluation values and the 95% confidence interval estimations of the population means of the subjects' evaluation values for the image words. Although some image words have small mean values, the lower bounds of the intervals are positive for all image words. It is found that the subjects give affirmative evaluations and feel that the presented clothes samples reflect the impressions expressed by the image words.
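The confidence interval used here can be sketched as follows; t_crit = 2.160 is the two-sided 5% Student t value for 13 degrees of freedom (14 subjects), and the score list is synthetic.

```python
import numpy as np

def mean_ci95(scores, t_crit=2.160):
    """Mean of subjects' evaluation values and its 95% confidence
    interval for the population mean (t-based, df = n - 1)."""
    x = np.asarray(scores, dtype=float)
    m = x.mean()
    half = t_crit * x.std(ddof=1) / np.sqrt(len(x))
    return m - half, m, m + half

# 14 synthetic 7-point-scale ratings for one image word
lo, m, hi = mean_ci95([2, 1, 3, 2, 2, 1, 3, 2, 1, 2, 3, 2, 2, 1])
print(lo > 0)  # True: a positive lower bound means an affirmative evaluation
```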


Fig. 7. Average of evaluation value and 95% of confidence interval

4 Evaluation of Coordinates Impressions

4.1 Coordinates Samples

The following four types of combinations of outerwear and a shirt are considered as coordinates samples.

1. Combination A: The impressions expressed by an image word are the same as the impressions of both the outerwear and the shirt.
2. Combination B: The impressions values of the outerwear and those of the shirt are in the same quadrant of the impressions space as the one containing the impressions values of the image word.
3. Combination C: The impressions values of the outerwear are in the same quadrant of the impressions space as the one containing the impressions values of the image word, and the impressions values of the shirt are in a quadrant next to it; or the reverse is used.
4. Combination D: The impressions values of the outerwear are in the quadrant of the impressions space opposite to the one containing the impressions values of the image word, and the impressions values of the shirt are in the same quadrant as the image word.
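The four combination types can be sketched as a quadrant test. The quadrant numbering ((+, +) = 1, counterclockwise) and the 'next'/'opposite' relations are assumptions inferred from the text and Table 5, and A and B are not distinguishable by quadrants alone, so they share one label here.

```python
def quadrant(x, y):
    """Quadrant of an impressions-space point (active, cleanliness)."""
    if x >= 0 and y >= 0:
        return 1
    if x < 0 and y >= 0:
        return 2
    if x < 0 and y < 0:
        return 3
    return 4

def combination_type(word_xy, outer_xy, shirt_xy):
    """Classify an outerwear/shirt pair against an image word by the
    quadrant relation of their impressions values."""
    qw = quadrant(*word_xy)
    qo, qs = quadrant(*outer_xy), quadrant(*shirt_xy)
    opposite = {1: 3, 2: 4, 3: 1, 4: 2}
    if qo == qw and qs == qw:
        return "A/B"   # both in the image word's quadrant
    if qo == opposite[qw] and qs == qw:
        return "D"     # outerwear opposite, shirt same
    if (qo == qw) != (qs == qw) and opposite[qw] not in (qo, qs):
        return "C"     # one same, the other in a next quadrant
    return "other"

print(combination_type((0.5, 0.5), (-0.4, -0.6), (0.6, 0.3)))  # D
```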

Twelve coordinates samples are generated for each combination, i.e., the total number of generated coordinates samples is 48. Table 5 shows the list of combinations used in the experiments.

Table 5. Combination types

(outerwear + shirt, with impressions-space quadrants in parentheses)

Combination A: flashy (1) + flashy (1); solemn (2) + solemn (2); frumpy (3) + frumpy (3); pesky (4) + pesky (4)
Combination B: light (1) + flashy (1); flashy (1) + light (1); simple (2) + solemn (2); solemn (2) + simple (2); depressed (3) + frumpy (3); frumpy (3) + depressed (3); embarrassed (4) + pesky (4); pesky (4) + embarrassed (4)
Combination C: solemn (2) + flashy (1); flashy (1) + solemn (2); frumpy (3) + solemn (2); solemn (2) + frumpy (3); pesky (4) + frumpy (3); frumpy (3) + pesky (4); flashy (1) + pesky (4); pesky (4) + flashy (1)
Combination D: frumpy (3) + flashy (1); pesky (4) + solemn (2); flashy (1) + frumpy (3); solemn (2) + pesky (4)

4.2 Evaluation Experiments for Coordinates Impressions


In order to analyze the relationship between the impressions expressed by an image word and coordinates impressions, evaluation experiments are performed. Fourteen subjects, graduate or undergraduate students, evaluate the impressions of randomly presented coordinates samples with the same 7-point scale as the one used in Section 3.3. Each subject evaluates the 48 types of coordinates samples.

4.3 Experimental Results

1) Combination A
Fig. 8 shows the averages of the subjects' evaluation values and the 95% confidence interval estimations of the population means for the four image words expressing the impressions of the outerwear and the shirt. It is found that if outerwear is combined with a shirt having the same impressions as the outerwear, those impressions are reflected in the fashion coordinates.


Fig. 8. Evaluation results of combination A

2) Combination B
Fig. 9 shows the averages of the subjects' evaluation values and the 95% confidence interval estimations of the population means for the combination of outerwear with the impression simple and a shirt with the impression solemn, and the combination of outerwear with the impression frumpy and a shirt with the impression depressed. It is found that the former coordinates give the impressions solemn and simple but do not give the impressions pesky or embarrassed, whose coordinate values are in the quadrant of the impressions space opposite to those of solemn or simple, as shown in Fig. 5. It is also found that the latter coordinates give the impressions depressed and frumpy but do not give the impressions flashy or light, whose coordinate values are in the quadrant opposite to those of depressed or frumpy, as shown in Fig. 5. The same results are obtained for the other coordinates in combination B.
It can be said that the combination of outerwear and a shirt with similar impressions, i.e., with coordinate values in the same quadrant of the impressions space, gives similar impressions as coordinates, but that the combination does not give impressions whose coordinate values are in the quadrant opposite to those of the outerwear and shirt impressions.


Fig. 9. Evaluation results of combination B: the combination of simple outerwear and a solemn shirt, and the combination of frumpy outerwear and a depressed shirt

3) Combination C
Fig. 10 shows the averages of the subjects' evaluation values, together with the 95% confidence intervals for the population means of those values, for the combination of outerwear with the impression pesky and a shirt with the impression frumpy, and for the combination of outerwear with the impression frumpy and a shirt with the impression pesky. It is found that the former coordinates give the impressions pesky and flashy but that they do not give the impressions simple or solemn, whose coordinate values are in the quadrant of the impressions space opposite to those of pesky. It is also found that the latter coordinates give the impressions frumpy and depressed but that they do not give the impressions flashy or light, whose coordinate values are in the quadrant of the impressions space opposite to those of frumpy. The same results are obtained for the other coordinates in combination C.
It can be said that in combination C the impressions of the outerwear are reflected well as coordinates impressions, even if the coordinate values of the shirt are in the quadrant of the impressions space next to those of the outerwear chosen by an image word. On the other hand, the impressions of the shirt are not necessarily reflected as coordinates impressions if the coordinate values of the outerwear are in the quadrant next to those of the shirt chosen by an image word. This is because the shirt is partly hidden by the outerwear.

[Bar chart: averages of evaluation values (-3 to 3) with 95% confidence intervals for the impression words flashy, light, solemn, simple, frumpy, depressed, pesky, and embarrassed; groups: pesky + frumpy, frumpy + pesky.]

Fig. 10. Evaluation results of combination C / the combination of pesky outerwear and a frumpy shirt, the combination of frumpy outerwear and a pesky shirt

4) Combination D
Fig. 11 shows the averages of the subjects' evaluation values, together with the 95% confidence intervals for the population means of those values, for the combination of outerwear with the impression frumpy and a shirt with the impression flashy, and for the combination of outerwear with the impression pesky and a shirt with the impression solemn. It is found that the average values are small as a whole and that it is difficult to derive general knowledge on the impressions of these coordinates. The same results are obtained for the other coordinates in combination D.
A Study on Fashion Coordinates Based on Clothes Impressions 209

[Bar chart: averages of evaluation values (-3 to 3) with 95% confidence intervals for the impression words flashy, light, solemn, simple, frumpy, depressed, pesky, and embarrassed; groups: frumpy + flashy, pesky + solemn.]

Fig. 11. Evaluation results of combination D / the combination of frumpy outerwear and a flashy shirt, the combination of pesky outerwear and a solemn shirt

4.4 Remarks

In combination A, coordinates impressions are reflected well by the outerwear and the shirt chosen based on an inputted image word, because the impressions expressed by the image word are the same as the impressions of the outerwear and the shirt. In combination B, the combination of outerwear and a shirt gives coordinates impressions similar to the one expressed by an inputted image word, because the outerwear and the shirt have impressions similar to each other. In combination C, the impressions of the outerwear are reflected as coordinates impressions even if the coordinate values of the shirt are in the quadrant of the impressions space next to those of the outerwear. On the other hand, in combination D it is difficult to reflect the impressions of an image word by the combination of outerwear and a shirt. Therefore, the three types of combinations A, B, and C are usable for fashion coordinates design using an image word.

5 Initial Coordinates Candidates Design

5.1 Design Method

The following design method of initial coordinates candidates is proposed from the knowledge obtained in Section 4. Since it is found in Section 4 that the impressions of outerwear have a great influence on coordinates impressions, up to 10 outerwear items are chosen in ascending order of the difference between the outerwear impressions and the impressions expressed by an inputted image word. Furthermore, in order to generate various initial coordinates candidates, up to 10 shirts are chosen at random, where a shirt is not chosen if the coordinate values of its impressions are in the opposite quadrant of the impressions space to those of the outerwear impressions.
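The selection rule above can be sketched as follows. The data layout, the item names, and the use of Euclidean distance as the "difference" in the impressions space are our assumptions for illustration:

```python
import math
import random

def quadrant(p):
    # Quadrant (1-4) of a point in the 2-D impressions space;
    # points on an axis go to a neighbouring quadrant.
    x, y = p
    if x >= 0 and y >= 0:
        return 1
    if x < 0 and y >= 0:
        return 2
    if x < 0 and y < 0:
        return 3
    return 4

OPPOSITE = {1: 3, 2: 4, 3: 1, 4: 2}

def initial_candidates(word_pos, outerwear, shirts, k=10, rng=random):
    """Generate up to k initial coordinates candidates.

    word_pos: impressions-space coordinates of the inputted image word.
    outerwear, shirts: dicts mapping item names to coordinates.
    """
    # Up to k outerwear items, in ascending order of the difference
    # between outerwear impressions and the image-word impressions.
    outers = sorted(outerwear,
                    key=lambda o: math.dist(outerwear[o], word_pos))[:k]
    candidates = []
    for o in outers:
        # A shirt in the quadrant opposite to the outerwear is not chosen.
        usable = [s for s in shirts
                  if quadrant(shirts[s]) != OPPOSITE[quadrant(outerwear[o])]]
        if usable:
            candidates.append((o, rng.choice(usable)))
    return candidates
```

Each candidate pairs a near-match outerwear item with a randomly drawn, non-opposite shirt, which keeps the candidate set varied while respecting the quadrant constraint.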

5.2 Evaluation Experiments for Initial Coordinates Candidates

The evaluation experiments are performed in order to evaluate the initial coordinates candidates generated by the method in Section 5.1. Eleven subjects, graduate or undergraduate students, input an image word, and 10 initial coordinates candidates are generated. The subjects evaluate the impressions of the presented candidates on a 7-point scale according to whether they feel the impressions expressed by the image word for the presented coordinates. The subjects repeat four sets of evaluation in the experiments, where the procedure from the input of an image word through the evaluation of the 10 initial candidates is called a set of evaluation.

5.3 Experimental Results and Remarks

Fig. 12 shows the rate of each evaluation value over all initial coordinates candidates evaluated by the subjects. It is found that the rate of positive evaluations (1, 2, or 3) is more than 50%. This means that the subjects feel that some initial coordinates generated by the method in Section 5.1 reflect the impressions expressed by the image words, and that the coordinates are usable as the input to the modification part of the coordinates generation system shown in Fig. 1.
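The rates plotted in Fig. 12 amount to a simple tally over all collected evaluations. A sketch with made-up scores (the paper's raw data are not available here):

```python
from collections import Counter

def evaluation_rates(scores):
    """Rate of each evaluation value (-3..3) over all evaluated
    candidates, as plotted in Fig. 12."""
    counts = Counter(scores)
    n = len(scores)
    return {v: counts[v] / n for v in range(-3, 4)}

def positive_rate(scores):
    """Rate of positive evaluations (1, 2, or 3)."""
    return sum(1 for s in scores if s > 0) / len(scores)

# Hypothetical scores pooled over all subjects and candidates.
scores = [3, 2, 2, 1, 1, 1, 0, 0, -1, -2]
```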

[Stacked bar chart: rate (0%-100%) of each evaluation value from -3 to 3 over all initial coordinates candidates; positive values account for more than half.]
Fig. 12. Rate of evaluation value for all initial coordinates candidates by subjects

6 Conclusions

This paper proposes a fashion coordinates generation system reflecting impressions expressed by an image word. For the construction of the system the following three items are discussed in this paper. The first is the analysis of the impressions of outerwear and shirts. In order to design fashion coordinates reflecting impressions expressed by an image word, the proposed system has the impressions estimation part, which makes use of the impressions space. For the construction of the impressions space, evaluation experiments are performed using many samples of outerwear and shirts. After factor analysis of the data obtained in the experiments, two impressions factors, an active factor and a cleanliness factor, are obtained, and the impressions space consisting of the two factor axes is constructed. Sample selection methods are proposed based on the impressions space, and evaluation experiments are performed for outerwear and shirts.
The second is the analysis of the impressions of the combination of outerwear and a shirt as coordinates. In order to obtain knowledge on combinations of outerwear and a shirt reflecting impressions expressed by an image word, evaluation experiments are performed for 4 types of combinations using the impressions space. The experimental results show that the impressions of outerwear have a strong influence on the impressions of the coordinates, and that three of the four types of combinations are usable for combining outerwear and a shirt.
The last is the generation method of initial coordinates candidates. The method is as follows: up to 10 outerwear items are chosen in ascending order of the difference between the outerwear impressions and the impressions expressed by an inputted image word, and in order to generate various initial coordinates candidates up to 10 shirts are chosen at random, where a shirt is not chosen if the coordinate values of its impressions are in the opposite quadrant of the impressions space to those of the outerwear impressions. The evaluation experiments are performed in order to evaluate the proposed method, and the results show that more than 50% of the initial coordinates candidates receive affirmative evaluations.
As future work, the proposed system will be implemented, and evaluation experiments on fashion coordinates design with the full system will be performed.

References
1. Sato, A., Watanabe, K., Yasumura, M.: suGATALOG: A Fashion Coordinate System Using User's Clothes Worn Pictures. Transactions of Information Processing Society of Japan 53(4), 1277–1284 (2012)
2. LyLy, Fashion Coordinate Dress-up Simulation "ecloth", http://www.ecloth.jp/
3. HONYY Entertainment, Inc., Social fashion site "FUKULOG", http://fukulog.jp/
4. Fujibayashi, T., Tokumaru, M., Muranaka, N., Imanishi, S.: Virtual Stylist Project - The color coordination support system with consideration to the color harmony. Technical Report of IEICE 102(534), 7–12 (2002)
5. Kosugi, S., Akabane, T., Kimura, S., Unagami, T., Arai, M.: A Method to Create Fashion Coordinates using Kansei and Time-Series Information. Forum on Information Technology 7(3), 467–468 (2008)
6. Sugahara, M., Miki, M., Hiroyasu, T.: Design of Japanese Kimono using an Interactive Genetic Algorithm. In: IEEE International Conference on Systems, Man and Cybernetics (SMC 2008), pp. 185–190 (October 12-15, 2008)
7. Rodriguez, L., Diago, L., Hagiwara, I.: Interactive Genetic Algorithm with fitness modeling for the development of a color simulation system based on customer's preference. Japan Journal of Industrial and Applied Mathematics 28(1), 27–42 (2011)
8. Ogata, Y., Onisawa, T.: Interactive Clothes Design Support System. In: Ishikawa, M., Doya, K., Miyamoto, H., Yamakawa, T. (eds.) ICONIP 2007, Part II. LNCS, vol. 4985, pp. 657–665. Springer, Heidelberg (2008)
9. Shoyama, S., Urakawa, R., Kouda, M.: Influence of Shirt Colors of Job Interview Suits in Impression Formation. Japanese Society for the Science of Design 50(6), 87–94 (2004)
10. Tamura, K., Muroi, R.: The Measurement of High School Girl's Image for Kimono and Yukata. Journal of Textile Engineering 57(3), 89–94 (2011)
11. Shimizu, K., Hagiwara, M.: Image Estimation of Words Based on Adjective Co-occurrences. The Transactions of the Institute of Electronics, Information and Communication Engineers D J89-D(11), 2483–2490 (2006)
12. Yamazaki, M., Ishizuka, K., Onisawa, T.: Combination Analysis of Motion and Melody in Phrase Animation. In: Proc. of the 6th International Conference on Soft Computing and Intelligent Systems, and the 13th International Symposium on Advanced Intelligent Systems, pp. 861–866 (2012)
Author Index

Ai, Guangyi 189
Arima, Tadahiro 179
Cabacas, Regin 1
Chuang, Chen-Chia 17, 25
Egawa, Tadahito 51
Fujimoto, Tatsuhiro 95
Fujisawa, Tetsuya 51
Hata, Hideki 65
Hata, Yutaka 35, 51, 65, 77, 95, 109, 125, 137
Hsiao, Chih-Ching 17, 25
Imawaki, Setsurou 137
Imawaki, Seturo 65, 125
Ishikawa, Tomomoto 125, 137
Jeng, Jin-Tsong 17
Kaku, Yusho 77
Kang, Jin-Shig 167
Kawanaka, Hiroharu 85
Kawano, Hideaki 147
Kikuchi, Sho 77
Kim, SoonWhan 167
Kobashi, Syoji 51, 65, 77
Kuki, Masato 35
Kuramoto, Kei 65, 77
Maeda, Hiroshi 147
Matsuda, Nobuo 157
Miyatake, Naoki 157
Moribe, Masayuki 147
Nakajima, Hiroshi 35, 95, 109
Onisawa, Takehisa 197
Orii, Hideaki 147
Ra, In-Ho 1
Sasano, Yuji 85
Sato, Hideaki 157
Shoji, Kenta 189
Song, Hwachang 13
Su, Shun-Feng 17, 25
Tajima, Fumiaki 157
Takahashi, Kazuyoshi 85
Takase, Haruhiko 85
Takeda, Takahiro 109
Tanaka, Junichi 35
Tang, Yijiang 179
Taniguchi, Kazuhiko 51
Tao, C.W. 17
Tsuchiya, Naoki 35, 95, 109
Tsukuda, Koki 125
Tsunoda, Yuriko 147
Tsuruoka, Shinji 85
Wada, Chikamune 179
Wagatsuma, Hiroaki 189
Wang, Yufeng 1
Yagi, Naomi 137
Yamamoto, Koji 85
Yamamoto, Moe 197
Yang, Meng-Cheng 17
Yasukawa, Midori 189
