INTERNATIONAL JOURNAL OF New Computer Architectures and their Applications
Volume 6, Issue 1, 2016
www.sdiwc.net
International Journal of
NEW COMPUTER ARCHITECTURES AND THEIR APPLICATIONS

Editor-in-Chief
Maytham Safar, Kuwait University, Kuwait
Rohaya Latip, University Putra Malaysia, Malaysia

Editorial Board
Ali Dehghan Tanha, University of Salford, United Kingdom
Ali Sher, American University of Ras Al Khaimah, UAE
Altaf Mukati, Bahria University, Pakistan
Andre Leon S. Gradvohl, State University of Campinas, Brazil
Azizah Abd Manaf, Universiti Teknologi Malaysia, Malaysia
Carl D. Latino, Oklahoma State University, United States
Duc T. Pham, University of Birmingham, United Kingdom
Durga Prasad Sharma, University of Rajasthan, India
E. George Dharma Prakash Raj, Bharathidasan University, India
Elboukhari Mohamed, University Mohamed First, Morocco
Eric Atwell, University of Leeds, United Kingdom
Eyas El-Qawasmeh, King Saud University, Saudi Arabia
Ezendu Ariwa, London Metropolitan University, United Kingdom
Fouzi Harrag, UFAS University, Algeria
Genge Bela, University of Targu Mures, Romania
Guo Bin, Institute Telecom & Management SudParis, France
Hocine Cherifi, Universite de Bourgogne, France
Isamu Shioya, Hosei University, Japan
Jacek Stando, Technical University of Lodz, Poland
Jan Platos, VSB-Technical University of Ostrava, Czech Republic
Jose Filho, University of Grenoble, France
Juan Martinez, Gran Mariscal de Ayacucho University, Venezuela
Khaled A. Mahdi, Kuwait University, Kuwait
Kayhan Ghafoor, University of Koya, Iraq
Ladislav Burita, University of Defence, Czech Republic
Lotfi Bouzguenda, University of Sfax, Tunisia
Maitham Safar, Kuwait University, Kuwait
Majid Haghparast, Islamic Azad University, Shahre-Rey Branch, Iran
Martin J. Dudziak, Stratford University, USA
Mirel Cosulschi, University of Craiova, Romania
Monica Vladoiu, PG University of Ploiesti, Romania
Mohammed Allam, Naif Arab University for Security Sciences, SA
Nan Zhang, George Washington University, USA
Noraziah Ahmad, Universiti Malaysia Pahang, Malaysia
Pasquale De Meo, University of Applied Sciences of Porto, Italy
Paulino Leite da Silva, ISCAP-IPP University, Portugal
Piet Kommers, University of Twente, The Netherlands
Radhamani Govindaraju, Damodaran College of Science, India
Talib Mohammad, Bahir Dar University, Ethiopia
Tutut Herawan, University Malaysia Pahang, Malaysia
Velayutham Pavanasam, Adhiparasakthi Engineering College, India
Viacheslav Wolfengagen, JurInfoR-MSU Institute, Russia
Waralak V. Siricharoen, University of the Thai Chamber of Commerce, Thailand
Wojciech Zabierowski, Technical University of Lodz, Poland
Yoshiro Imai, Kagawa University, Japan
Zanifa Omary, Dublin Institute of Technology, Ireland
Zuqing Zhu, University of Science and Technology of China, China

Overview
The SDIWC International Journal of New Computer Architectures and Their Applications (IJNCAA) is a refereed online journal designed to address the following topics: new computer architectures, digital resources, and mobile devices, including cell phones. In our opinion, cell phones in their current state are really computers, and the gap between these devices and the capabilities of computers will soon disappear.

Original unpublished manuscripts are solicited in areas such as computer architectures, parallel and distributed systems, microprocessors and microsystems, storage management, communications management, reliability, and VLSI.

One of the most important aims of this journal is to increase the usage and impact of knowledge as well as the visibility and ease of use of scientific materials. IJNCAA does NOT CHARGE authors any fee for online publication of their materials in the journal and does NOT CHARGE readers or their institutions for accessing the published materials.

Publisher
The Society of Digital Information and Wireless Communications
Miramar Tower, 132 Nathan Road, Tsim Sha Tsui, Kowloon, Hong Kong

Further Information
Website: http://sdiwc.net/ijncaa, Email: ijncaa@sdiwc.net,
Tel.: (202)-657-4603 - Inside USA; 001(202)-657-4603 - Outside USA.

Permissions
International Journal of New Computer Architectures and their Applications (IJNCAA) is an open access journal, which means that all content is freely available without charge to users or their institutions. Users are allowed to read, download, copy, distribute, print, search, or link to the full texts of the articles in this journal without asking prior permission from the publisher or the author. This is in accordance with the BOAI definition of open access.

Disclaimer
Statements of fact and opinion in the articles in the International Journal of New Computer Architectures and their Applications (IJNCAA) are those of the respective authors and contributors and not of the International Journal of New Computer Architectures and their Applications (IJNCAA) or The Society of Digital Information and Wireless Communications (SDIWC). Neither The Society of Digital Information and Wireless Communications nor the International Journal of New Computer Architectures and their Applications (IJNCAA) make any representation, express or implied, in respect of the accuracy of the material in this journal, and they cannot accept any legal responsibility or liability for errors or omissions that may be made. Readers should make their own evaluation of the appropriateness or otherwise of any experimental technique described.
CONTENTS
ORIGINAL ARTICLES
Abstract: Cloud computing has become a powerful trend in the development of ICT services. It allows dynamic resource scaling from a seemingly infinite resource pool to support Cloud users. This scenario calls for ever larger computing infrastructures and increased processing power. Demand for cloud computing grows continually, which shifts attention to the scope of green cloud computing: reducing energy consumption in Cloud computing while maintaining good performance. However, there is a lack of performance metrics for analyzing the trade-off between energy consumption and performance. Considering the high volume of mixed user requirements and the diversity of services offered, an appropriate performance model for achieving a better balance between Cloud performance and energy consumption is needed. In this work, we focus on green Cloud computing through a scheduling optimization model. Specifically, we investigate the relationship between the performance metrics chosen in scheduling approaches and energy consumption. Through this relationship, we develop an energy-based performance model that provides a clear picture of parameter selection in scheduling for effective energy management. We believe that a better understanding of how to model scheduling performance will lead to green Cloud computing.

Keywords: green cloud; scheduling; energy efficiency; optimization model; energy management.

1. Introduction

Cloud computing is a state-of-the-art technology that provides computing services, i.e., IaaS, PaaS and SaaS, to users [1, 2]. Cloud providers, e.g., Amazon, Microsoft and Google, offer their users a resource-sharing model in which resources (e.g., processors, storage) can be added and released easily as they are or are not needed. Large computing and storage infrastructures like the Cloud need more energy for electricity and cooling systems, so more expense must be invested. Furthermore, computer systems not only consume vast amounts of power but also emit excessive heat; this often results in system unreliability and performance degradation. Previous studies (e.g., [3-5]) have reported that system overheating causes system freezes and frequent system failures. According to the authors in [4], the highest energy costs of data centers come from keeping the servers running. The servers need to be available and accessible throughout the year even when nobody is accessing them. This situation incurs high costs in electricity and cooling systems.

In order to sustain a good service reputation, data centers need to meet processing and storage requirements 24/7. The consequence of high resource availability in the Cloud is not merely financial expense but, worse, harm to the environment through its carbon footprint. More greenhouse gases (GHG) are released into the atmosphere, leading to global warming, acid rain and smog [6]. Although there have been many research efforts to reduce the energy consumption of computing operations, there is still no decision support system for choosing the right performance metric in a dynamic computing environment. A large-scale distributed system like the Cloud needs to support a large number of users with diverse processing requirements. Meanwhile, performance degradation at any stage of processing is unacceptable. It is a challenge to find the best trade-off between system performance and energy consumption in the Cloud.

Task scheduling approaches mainly aim to maximize, or at least sustain, system performance. Such approaches can also promote better energy consumption by effectively scheduling users' requirements based on resource availability [7-10]. In conventional scheduling optimization models, the performance parameters are usually specified as deterministic. However, in realistic energy management systems, many parameters have uncertain natures with multiple states and features. When it comes to green Cloud computing, several issues need to
International Journal of New Computer Architectures and their Applications (IJNCAA) 6(1): 1-8
The Society of Digital Information and Wireless Communications, 2016 ISSN 2220-9085 (Online); ISSN 2412-3587 (Print)
be taken into account, for example, security, accuracy and energy. With respect to energy saving, a few aspects need to be highlighted, such as power management, virtualization, and cooling. Energy efficiency in Cloud computing involves a series of interactions and interdependencies between the relevant system components, processes and metrics. Hence, a clear guideline is needed for assessing the performance and energy efficacy of the Cloud, especially in its scheduling and provisioning activities.

In this paper we focus on efficient energy consumption as influenced by parameter selection in the scheduling approach. We develop an optimization model for energy-based scheduling that aims to provide a guideline for analyzing the trade-off between system performance and energy consumption in the Cloud. We then investigate the relationship between the performance metrics chosen in the scheduling approach and energy consumption, to facilitate green computing. Managing energy consumption effectively in Cloud computing would, in turn, be very beneficial to the cost of computation.

The remainder of this paper is organized as follows. A review of related work is presented in Section 2. In Section 3 we describe the parameter categories chosen for efficient energy management. Section 4 details our performance model for energy efficiency. Experimental settings and results are presented in Section 5. Finally, conclusions are drawn in Section 6.

2. Literature Review

Energy management has inspired many researchers [3, 4, 11-15] to focus on green Cloud computing. The scope of energy assessment for modeling performance should be stretched further to incorporate parameter selection and the energy model being used. In [16], the authors emphasized resource utilization in their energy model. They used virtualization as a core technology contributing to green IT because of its ability to share limited resources among varying workloads. The number of physical machines (PMs) can be reduced when processing is performed by virtual machines (VMs). VMs are very flexible: they act as independent servers that maximize resource utilization while achieving energy efficiency. However, the selection of parameters in their performance model for energy efficiency is still an open issue.

Some researchers focused on resource state in their energy models. According to the authors in [12], processors consume approximately 32 W when operated in idle mode, whereas storage uses merely 6 W. In peak processing mode, the energy consumption of processors can be boosted to more than 80 W, up to 95 W [17]. The energy-efficient scheduling proposed in [8] dynamically allocates users' tasks to processors to achieve better performance and minimize energy consumption. In their work, system performance and energy consumption are measured throughout task execution, during both peak and idle states, and the total energy consumption is then recorded.

There are also works that calculate energy consumption through modern scheduling approaches. The power-efficient scheduling in [7] assigns sets of virtual machines (VMs) to physical machines (PMs) for data center management. They used consolidation fitness to determine the right VMs to take over when an existing PM is switched off. However, it is a challenge to determine the most suitable VM due to unpredictable changes in the system workload. The power-aware mechanism in [18] is based on priority scheduling for efficient energy management. They adopted various heuristics for energy-aware scheduling algorithms that employ a multi-objective function for diverse efficiency-performance trade-offs. Their scheduling algorithms consist of three steps (i.e., job clustering, re-evaluating to give better scheduling alternatives, and selecting the best schedule). They conducted their experiments in a homogeneous environment, which disguises the Cloud computing environment. The existing research proposing energy-efficient scheduling is able to reach an appropriate balance between system performance and energy consumption. However, the dynamicity and heterogeneity of the computing environments considered are limited to some extent.

Some researchers (e.g., [4, 6, 19]) have highlighted the incorporation of low-energy computing nodes into heterogeneous distributed systems as a way to achieve energy efficiency. Because scheduling approaches are subject to the system environment and scale, an effective performance model is required for better evaluation of both performance and energy usage.
However, there is a lack of an exclusive parameter-selection strategy for energy management that balances system performance against energy consumption. In Table 1, we summarize the selection of performance metrics and system behaviors used by existing scheduling approaches to design their energy models.

Table 1. Relationship between Scheduling Rule and Energy Model

1. Scheduling rule: Heuristic approach used for pre-scheduling processing. Such a predictable rule satisfies energy saving because it consumes less processing time.
   Energy model: considers the busy and idle states of processing elements.
2. Scheduling rule: Thresholds for processor, memory, disk and communication link in resource allocation. A threshold chosen too low may reduce resource utilization, while one set too high leads to communication bottlenecks.
   Energy model: considers resource utilization and overhead in the system.
3. Scheduling rule: Migrating or remapping strategy in scheduling using a set of virtual machines (VMs).
   Energy model: virtualization leads to energy efficiency.

3. Classification of Scheduling Parameters

The existing scheduling approaches (e.g., [3, 4, 7, 20]) analyze the performance of their energy management systems through various scheduling algorithms. They have chosen several types of parameters, such as time-based metrics, utilization and overhead.

3.1. Time-based Metrics

Time-based metrics are among the most popular parameters chosen in work on the green Cloud. Such parameters measure the effectiveness of scheduling decisions in uncertain, dynamic and large-scale environments. Time-based metrics, for example execution time, waiting time and response time, are designed to monitor and manage queuing problems in scheduling. The major problem in scheduling is to map and execute the various users' jobs on the right resources. Because the scheduler needs to obtain information from users and resources, scheduling decisions most of the time consume more processing energy than storage energy. The resources need to be available to perform job execution most of the time. Therefore, to improve execution time while minimizing energy consumption, we need to take optimization techniques into account when designing the scheduling approach.

Furthermore, the issue of a suitable queue length comes into the picture when the total waiting time of jobs leads to high energy consumption. Note that the determination of queue size is significantly related to the scale of the computing system. Specifically, the queue size contributes to better job waiting time, as it relates to buffer management. A suitable queue size needs to be identified to reduce data access time in the buffer. We do not want to power the entire memory module for a lengthy time only for data access operations. Hence, an effective scheduling approach for a dynamic environment is required to define the (most) suitable queue length for reducing power consumption.

3.2 Utilization

There are scheduling approaches that calculate energy consumption based on the computing resources' busy and idle states. In particular, resources with high busy time mean that system utilization is improved. Meanwhile, the system is considered to have low utilization when many resources are idle over a given observation time. However, from an energy management perspective, high utilization may lead to large energy consumption. A resource with high utilization certainly consumes high processing energy to complete its task executions. The idle state of resources is even more crucial: a huge percentage of power consumption (i.e., for electricity and cooling systems) goes to facilitating the running resources. Idle resources need to be available even though no processing is happening in their space. Resource utilization accounts for only 53% of the total energy consumed in data centers [21].
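The time-based metrics of Section 3.1 and the utilization of Section 3.2 can both be read off a single schedule. Below is a minimal single-resource FIFO sketch; the task lengths and the observation window are illustrative assumptions, not values from the paper.

```python
# Minimal sketch: time-based metrics (Section 3.1) and utilization
# (Section 3.2) read off one FIFO schedule on a single resource.
# Task lengths and the observation window are illustrative assumptions.

def fifo_metrics(exec_times, window):
    """Return (avg_waiting_time, avg_response_time, utilization)."""
    clock = 0.0
    waits, completions = [], []
    for t in exec_times:
        waits.append(clock)        # a task waits until earlier ones finish
        clock += t
        completions.append(clock)  # response = waiting + own execution
    n = len(exec_times)
    return sum(waits) / n, sum(completions) / n, clock / window

avg_wait, avg_resp, util = fifo_metrics([2.0, 4.0, 1.0], window=10.0)
# waits 0, 2, 6 s -> avg_wait = 8/3; completions 2, 6, 7 s -> avg_resp = 5.0
# the resource is busy 7 s of a 10 s window -> util = 0.7
```

Under the reasoning of Section 3.2, pushing utilization toward 1 raises busy-state energy, while low utilization leaves resources idling yet powered; either extreme shows up directly in these three numbers.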
Therefore, the best solution is to manage resource utilization effectively for energy efficiency. The scheduler has a big role in monitoring resource utilization in a dynamic environment.

3.3. Overhead

The storage and processing resources in the Cloud must be highly available, as this reflects Quality of Service (QoS). This includes the ability of the Cloud to adapt to unexpected failures, e.g., storage overload, traffic congestion and performance fluctuation. Such scenarios require extra time for the Cloud providers to solve and fix without notifying the users. Some strategies use replication of objects and services, along with redundant processing and communication mechanisms, to handle unexpected failures. Implementing these strategies requires more than one communication path for disseminating the same information, and several processing elements for performing the same action. In the scheduling approach, we need to design extra procedures or policies to manage such unexpected failures and implement a reserve strategy. This explicitly incurs extra communication and processing overheads in the system. For the sake of energy efficiency, overhead should be minimized while maintaining system performance. Hence, tunable parameters in the experimental setting are significant for thoroughly identifying the system behavior in order to achieve the target results (a better trade-off).

4. Optimization Stages for Energy Efficiency

Various performance models have been adopted in scheduling approaches for energy efficiency (e.g., [3, 4, 7, 9, 15]). In this work, we specifically divide our optimization model for energy-based scheduling into three stages: (i) identifying, (ii) formulating, and (iii) modifying.

4.1 Identifying Stage

Several energy issues, such as energy waste and inaccurate energy measurement, have been identified by existing researchers (e.g., [6, 14, 22]). The percentage of energy consumed in task scheduling normally differs from one scheduler to another, as there are other factors that contribute to this amount [16]. Some scheduling approaches, e.g., heuristics and game theory, may improve system utilization but not energy efficiency. This is because energy management does not depend entirely on the selected scheduling approach. Furthermore, the system goal generally aims to increase system performance while minimizing energy consumption. These maximization and minimization objectives need to be carefully designed in order to avoid a badly unbalanced outcome.

Figure 1. Energy-based Performance Model. The optimization stages are: Identify (recognize the current problem: what is the current problem?), Formulate (integrate performance metrics and create a formula: how are the metrics interrelated?), and Modify (evaluate and revise if necessary: what are the results, and are any changes needed?).

In our optimization model, the identifying stage focuses on analyzing the current energy-performance problem in the system. Initially, it is necessary to thoroughly understand the relationship between system performance and energy consumption in the current computing system. Such a review is important to determine a pattern of how energy consumption crosses or touches system performance during the execution process. The energy consumption of a large-scale data center can be analyzed through its operational infrastructure, which can be classified based on the power usage of physical equipment and on processing conditions. This leads to two different measurements of energy efficiency: power usage effectiveness (PUE) and data center effectiveness (DCE) [16]. PUE measurement concerns the total power used for IT equipment such as servers, routers and cabling. Meanwhile, DCE is calculated through the resource management strategy (i.e., scheduling, load balancing and security) applied in the server room or data center to comply with users' requirements. PUE and DCE are interrelated in supporting computing operations.

Note that a high-performance scheduler consumes high processing power at its peak processing time; meeting the users' requirements for task scheduling thus increases the energy consumption exponentially.
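The two efficiency measures just mentioned can be sketched as follows. We use the common definitions, PUE as total facility power over IT equipment power and DCE (often written DCiE) as its reciprocal; this is our reading of [16], and the kWh values are illustrative.

```python
# Sketch of the two data-center efficiency measures mentioned above.
# Common definitions (our reading; values illustrative):
#   PUE = total facility energy / IT equipment energy  (>= 1, lower is better)
#   DCE = IT equipment energy / total facility energy  (the reciprocal)

def pue(total_facility_kwh, it_equipment_kwh):
    return total_facility_kwh / it_equipment_kwh

def dce(total_facility_kwh, it_equipment_kwh):
    return it_equipment_kwh / total_facility_kwh

# A facility drawing 1000 kWh, of which 625 kWh reaches IT equipment:
ratio = pue(1000.0, 625.0)    # 1.6
useful = dce(1000.0, 625.0)   # 0.625, i.e. 62.5% of the power does IT work
```

The remainder (here 37.5%) is the cooling, ventilation and electrical overhead discussed throughout this section.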
In another scenario, IT equipment in the data center uses electricity to stay available 24/7 for processing purposes. Even though the Cloud utilizes virtualization technology, it still relies on physical computing equipment at its back end. To monitor energy consumption from the IT equipment perspective, the energy distribution must be accurately measured, because the usage of this equipment contributes to the electricity bill, which relates directly to operational and maintenance costs. For example, server rooms and data centers need mechanical and electrical (M&E) infrastructure, as well as ventilation or cooling infrastructure, to support operations in the room. Nowadays, the actual cost of managing IT equipment, whether in a server room or a laboratory, has become a big issue for organizations.

The performance model should then focus on how to utilize the IT equipment so that the energy distribution is consumed optimally. Several strategies, such as scheduling, load balancing and authorization, need to be highlighted in order to optimally manage and utilize the energy distribution. In this work, we propose a task scheduling strategy that acts as a mediator to monitor system performance and processing power at the same time. The scheduler is designed to be capable of analyzing the effectiveness of the scheduling processes by identifying which scheduling patterns lead to energy waste. A priori, the scheduler is expected to produce a scheduling pattern that gives better performance without enlarging the energy consumed.

4.2 Formulating Stage

Cloud computing offers its services (i.e., IaaS, PaaS, SaaS) to various users anywhere, at any time. It is required to be reliable in both computation and communication activities. In order to sustain performance, a thorough investigation of resource management, specifically task scheduling, is required. Note that system performance is much influenced by the features of the system and the characteristics of the users. Therefore, it is important to formulate the performance parameters in an assessment method based on the system environment and its scale. In this stage, we gather the relevant performance metrics to create a formula for assessing energy efficiency. Such criteria can be defined according to several levels that represent complexity in communication and processing activities.

In this work, we highlight three levels of system complexity: (i) homogeneous/heterogeneous, (ii) static/dynamic and (iii) local/centralized/distributed. High complexity means a huge amount of energy consumed for task scheduling decisions. For example, the system is considered to consume a large amount of processing energy when there is a huge number of incoming tasks to be scheduled. Also, the data center needs a large number of processing elements (PEs) to offer high availability in processing. In some cases, even a small number of PEs consumes a large percentage of energy; this happens when the PEs operate 24/7 and need a very cold room to control the heat they release. Note that the excessive heat emitted by these PEs causes notable power consumption for cooling them. In addition, energy consumption is proportional to the users' requirements: the resources operate for extensive processing time when there is a huge workload in the system. This situation increases the energy usage of the whole system operation.

In response to the system and user criteria, selecting a performance metric that achieves a better trade-off between performance and energy consumption is a huge challenge. Users in a large-scale computing system normally demand varied processing requirements. Hence, the system performance needs to fulfill the users' requirements in order to sustain the system's reputation. This is very important in the Cloud because many Cloud providers compete with each other to provide better performance. Therefore, the performance metrics used to evaluate energy consumption should be fragmented out of the total/average system performance. The system performance might be reduced because a portion of it needs to embrace the energy consumed.

In response to the collection of performance metrics, we then raise the issue of how to integrate each metric for energy efficiency. Basically, the metrics aim at better scheduling decisions that lead to improved system performance.
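One simple way to integrate a performance metric and an energy metric into a single formula, as this stage calls for, is a weighted sum of normalized scores. The normalization scheme and the weight alpha below are our assumptions; the paper does not commit to a concrete formula.

```python
# Sketch of integrating a performance metric and an energy metric into one
# weighted objective. The normalisation and the weight alpha are our
# assumptions; the paper does not fix a concrete formula.

def tradeoff_score(perf, energy, perf_max, energy_max, alpha=0.5):
    """Higher is better: alpha weights performance against energy saving."""
    perf_n = perf / perf_max          # normalise both metrics to [0, 1]
    energy_n = energy / energy_max
    return alpha * perf_n + (1.0 - alpha) * (1.0 - energy_n)

# Two candidate schedules under equal weighting (alpha = 0.5):
a = tradeoff_score(perf=90, energy=70, perf_max=100, energy_max=100)  # ~0.60
b = tradeoff_score(perf=95, energy=95, perf_max=100, energy_max=100)  # ~0.50
# The slightly slower but much cheaper schedule wins the trade-off.
```

Sweeping alpha from 0 to 1 traces out the performance/energy trade-off curve for a set of candidate schedules, which is one way to visualize the balance this section discusses.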
With respect to energy efficiency, the design of the scheduling approach should be able to monitor both performance and energy consumption over a given time duration. It is wise to capture energy consumption for a specific time window, which can be based on the incoming workload, while calculating the average execution time. This is because frequently measuring the energy consumption itself implicitly increases the percentage of power usage. The right proportion of the system performance, e.g., total execution time, used to measure the energy consumption must be carefully considered.

For instance, the performance metric in a Cloud data center can be measured by processing overhead as follows:

Overhead = Σ Σ ( · )        (1)

The total energy consumption in the system can then be calculated as:

Energy = Σ ( · )        (2)

assuming that the energy consumption is measured through a simulation program. In this example, the total number of incoming tasks is recorded within the scheduling process and used to calculate the overhead, while the energy consumption is measured over the entire simulation program. Some scheduling approaches (e.g., [9, 21]) use fixed power consumption figures (in Watts), identified at the busy and idle states of the processing elements, to calculate energy consumption.

Table 2. Suggested Metrics

1. Evaluation scope: Hardware measurement. Metrics: real-time voltage and current, processing state/frequency.
2. Evaluation scope: Data Center. Metrics: time-based performance, utilization, processing overhead.
3. Evaluation scope: Mobile Cloud Computing. Metrics: communication overhead (transaction delay and traffic congestion).

In this work, we also offer some metric suggestions for energy efficiency that can be chosen based on a given evaluation scope (Table 2). From Table 2, clearly items 2 and 3 relate to the scheduling approach, where they can be formulated to maintain system performance while monitoring the energy consumed. Due to the large-scale constitution of Cloud environments, such parameter selection is considered unbounded and is still an open issue.

4.3 Modifying Stage

The energy-based scheduling approach aims to schedule users' tasks to run on computer systems with low energy consumption. This means that the computer system can tune its processing states to adapt to system changes while reducing its energy consumption. In response to this, the computer system needs to collect run-time information about applications, monitor the processing energy consumption states, notify about state changes, and compute energy-based scheduling decisions. Through the monitoring process, the system administrator is able to identify the processing activities that consume large amounts of energy. The scheduling policy can then be modified in order to balance system performance and energy consumption. Such a modification procedure may involve several alteration techniques (i.e., fading, chaining, looping, etc.).

The most significant step in this modification stage is the trial-and-error process. Because task scheduling in a dynamic computing system is known to be a nondeterministic polynomial time (NP) problem, in this work we suggest using the generate-and-test concept, one of the heuristic methods for solving NP issues. The generate-and-test process investigates how the systems and applications respond to processing state changes. We then adjust the performance metrics to fit the target goal (a better trade-off between performance and energy consumption).

In some cases, the modifying stage is implicitly used to track any misidentification in the previous stages. Dynamic scheduling involves many challenging issues, such as high complexity, huge overhead and performance degradation. The identification of the right system environment, together with the suitable performance metrics, needs to be thoroughly analyzed. The suitability of both criteria for optimizing the system can be resolved during the modification process.
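The generate-and-test idea suggested for the modifying stage can be sketched as follows: generate candidate task-to-machine assignments, test each against an energy-aware cost, and keep the best seen. The task lengths, power figures and cost weighting are all illustrative assumptions, not values from the paper.

```python
# Sketch of the generate-and-test heuristic suggested for the modifying
# stage: generate candidate task-to-machine assignments, test each against
# an energy-aware cost, keep the best seen. All numbers are illustrative.
import random

TASKS = [4, 2, 7, 3, 5]           # task lengths in seconds (assumed)
MACHINES = 2
P_BUSY, P_IDLE = 90.0, 30.0       # W per machine (assumed)

def cost(assignment):
    """Makespan plus (scaled) total energy spent over the makespan."""
    loads = [0.0] * MACHINES
    for length, machine in zip(TASKS, assignment):
        loads[machine] += length
    makespan = max(loads)
    energy = sum(P_BUSY * l + P_IDLE * (makespan - l) for l in loads)
    return makespan + 0.01 * energy

def generate_and_test(trials=200, seed=1):
    rng = random.Random(seed)
    best, best_cost = None, float("inf")
    for _ in range(trials):
        candidate = [rng.randrange(MACHINES) for _ in TASKS]
        c = cost(candidate)
        if c < best_cost:          # test: keep the cheapest schedule so far
            best, best_cost = candidate, c
    return best, best_cost

best, best_cost = generate_and_test()
```

Because the search is random, this only approximates the optimum in general; for the five tasks here the space is small enough that a best assignment (makespan 11 s) is found reliably, which is the adjust-and-re-test loop this stage describes.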
during the modification process. For example, waiting time might not be the right performance metric for calculating energy consumption in the scheduler queue of a public Cloud, because a public Cloud is expected to receive a large-scale number of workloads arriving 24/7, which results in extensive waiting times. Hence, in the modification stage, the Cloud provider can change the performance metric by combining both waiting time and processing overhead in order to measure the energy consumption. The Cloud provider can also design adaptive backup and maintenance activities into the scheduling approach to reduce the processing power consumed by such activities.

5. Experimental Results

The opt-scheduler yields a reduction of more than 50% on average in energy consumption. Note that energy consumption with the opt-scheduler exhibits better results under all scheduling algorithms, and the differences between them are relatively small. Overall, the opt-scheduler is able to deliver low energy consumption and better processing overhead under several scheduling policies. Therefore, aggregating the three optimization stages (Section 4) is significant in striving for better performance and energy consumption.

[Figure: Average overhead comparison of the opt-scheduler versus the non-opt-scheduler.]
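The generate-and-test heuristic and the fixed busy/idle power model described above can be sketched as follows. This is a minimal illustration, not the authors' simulator: the power values (`P_BUSY`, `P_IDLE`), the task set, and the random candidate generator are all assumptions.

```python
import random

# Assumed busy/idle power draw of a processing element, in watts
# (illustrative values, not the paper's measured figures).
P_BUSY, P_IDLE = 200.0, 100.0

def schedule_energy(loads, makespan):
    """Fixed-power energy model: each processor is busy for its own load
    and idle for the remainder of the makespan."""
    return sum(P_BUSY * busy + P_IDLE * (makespan - busy) for busy in loads)

def generate_and_test(tasks, n_proc, trials=500, seed=0):
    """Generate random task-to-processor assignments and keep the one with
    the lowest energy: a simple heuristic for the NP-hard scheduling problem."""
    rng = random.Random(seed)
    best_cand, best_energy = None, float("inf")
    for _ in range(trials):
        cand = [rng.randrange(n_proc) for _ in tasks]   # generate a candidate
        loads = [0.0] * n_proc
        for t, p in zip(tasks, cand):
            loads[p] += t
        e = schedule_energy(loads, max(loads))          # test it
        if e < best_energy:
            best_cand, best_energy = cand, e
    return best_cand, best_energy
```

Under this model a balanced assignment minimizes idle time and therefore energy, which is what the random search converges to.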
… performance model for optimizing task scheduling, where energy efficiency becomes the next goal. Note that (near-)optimal scheduling for energy efficiency is still an open issue. Hence, there is a lot of potential for further research on its performance model and design. Optimistically, the Cloud will be able to achieve a better trade-off between performance and energy consumption once there is a clear guideline for designing the energy-based performance model.

References

[1] C. Wu and R. Buyya, Cloud Data Centers and Cost Modeling: A Complete Guide to Planning, Designing and Building a Cloud Data Center, Morgan Kaufmann, 2015.
[2] D. Rountree and I. Castrillo, The Basics of Cloud Computing: Understanding the Fundamentals of Cloud Computing in Theory and Practice, MA, USA: Elsevier, 2014.
[3] M. Hussin, Y. C. Lee, and A. Y. Zomaya, "Efficient Energy Management using Adaptive Reinforcement Learning-based Scheduling in Large-Scale Distributed Systems," presented at the 40th Int'l Conf. on Parallel Processing (ICPP 2011), Taipei, Taiwan, 2011.
[4] Y. C. Lee and A. Y. Zomaya, "Energy efficient utilization of resources in cloud computing systems," Journal of Supercomputing, pp. 1-13, 2010.
[5] N. B. Rizvandi and A. Y. Zomaya, "A Primarily Survey on Energy Efficiency in Cloud and Distributed Computing Systems," arXiv preprint arXiv:1104.5553, 2012.
[6] Z. Abbasi, M. Jonas, A. Banerjee, S. Gupta, and G. Varsamopoulos, Evolutionary Green Computing Solutions for Distributed Cyber Physical Systems, vol. 432, Springer, 2013.
[7] M. Sharifi, H. Salimi, and N. M., "Power-efficient distributed scheduling of virtual machines using workload-aware consolidation techniques," Journal of Supercomputing, vol. 61, pp. 46-66, 2011.
[8] M. Hussin, Y. C. Lee, and A. Y. Zomaya, "Priority-Based Scheduling for Large-Scale Distributed Systems with Energy Awareness," in Proc. of the 2011 IEEE Ninth International Conference on Dependable, Autonomic and Secure Computing, Sydney, Australia, 2011, pp. 503-509.
[9] K. Li, "Energy efficient scheduling of parallel tasks on multiprocessor computers," Journal of Supercomputing, pp. 1-25, 2010.
[10] N. Kim, J. Cho, and E. Seo, "Energy-credit scheduler: An energy-aware virtual machine scheduler for cloud systems," Future Generation Computer Systems, vol. 32, pp. 128-137, 2014.
[11] N. B. Rizvandi and A. Y. Zomaya, "A Primarily Survey on Energy Efficiency in Cloud and Distributed Computing Systems," arXiv preprint, 2012.
[12] S. U. Khan and C. Ardil, "Energy Efficient Resource Allocation in Distributed Computing Systems," World Academy of Science, Engineering and Technology, pp. 667-673, 2009.
[13] W. X and W. Y, "Energy-efficient Multi-task Scheduling Based on MapReduce for Cloud Computing," Seventh Int. Conf. on Computational Intelligence and Security, 2011.
[14] D. Wei, L. Fangming, J. Hai, L. Bo, and L. Dan, "Harnessing renewable energy in cloud datacenters: opportunities and challenges," IEEE Network, vol. 28, pp. 48-55, 2014.
[15] N. Kim, J. Cho, and E. Seo, "Energy-credit scheduler: an energy-aware virtual machine scheduler for cloud systems," Future Generation Computer Systems, vol. 32, pp. 128-137, 2014.
[16] A. Beloglazov, R. Buyya, Y. C. Lee, and A. Y. Zomaya, "A Taxonomy and Survey of Energy-Efficient Data Centers and Cloud Computing Systems," Advances in Computers, vol. 82, pp. 47-111, 2011.
[17] L. A. Barroso and U. Holzle, "The Case for Energy-Proportional Computing," IEEE Computer, vol. 40, pp. 33-37, 2007.
[18] A. Aziz and H. El-Rewini, "Power efficient scheduling heuristics for energy conservation in computational grids," Journal of Supercomputing, vol. 57, pp. 65-80, 2011.
[19] J. G. Koomey, Estimating Total Power Consumption by Servers in the US and the World, February 2007.
[20] C.-K. Tham and T. Luo, "Sensing-Driven Energy Purchasing in Smart Grid Cyber-Physical System," IEEE Transactions on Systems, Man and Cybernetics: Systems, vol. 43, pp. 773-784, 2013.
[21] L. Minas and B. Ellison, "The Problem of Power Consumption in Servers," in Energy Efficiency for Information Technology, Intel Press, 2009.
[22] P. Schreier, "An Energy Crisis in HPC," Scientific Computing World: HPC Projects, Dec. 2008/Jan. 2009.
International Journal of New Computer Architectures and their Applications (IJNCAA) 6(1): 9-15
The Society of Digital Information and Wireless Communications, 2016 ISSN 2220-9085 (Online); ISSN 2412-3587 (Print)
ABSTRACT

1 INTRODUCTION
… face and EEG to estimate emotion. This research considers the implementation of BCI for communication and control, where estimation of the emotional state supported by facial analysis is combined with estimation of the execution of different mental tasks.

2 SIGNAL PROCESSING AND CLUSTERING OF NEURONS

A BCI system is realized in the following steps: when something has to be done, a thought is developed in the brain, which leads to the development of a neuron potential pattern. By reading the brain's registered electrophysiological signals, the developed potential pattern is transformed into analyzable signal patterns. These signal patterns are developed by the BCI equipment, and their spectrum is analyzed using various pattern analysis techniques. After recognizing the human's intention about the task the brain wants from the smart device or computer, we can determine the proper command (or sequence of commands) to execute the required task. Feedback to the user can be realized in various forms, e.g. device switching on/off, emergency call, video, audio, etc.

As can be seen, BCI communication with smart devices requires filtering of the registered brain signals and pattern analysis techniques for clustering of neurons and pattern recognition.

At any moment the human brain generates a wave for a particular thought, but at the same time it also generates waves corresponding to other, unrelated thoughts. These additional waves appear because it is sometimes not possible to concentrate fully on a particular thought, and they act as noise for the original waves. To handle these noise waves, the user has to increase his concentration during the BCI process. To solve this problem it is necessary to develop a noise-filtering mechanism that can detect the unrelated spectrum and filter it out from the useful spectrum [22].

The process of noise filter design requires knowledge from neurology specific to the situation and application. After some experiments to find the pattern of the signals, a noise filter was prepared and calibrated.

Another problem that has to be solved is connected with the clustering of neurons, where it is necessary to divide the 80-120 billion brain neurons into a few clusters. There is no exact answer to the question of on what basis we should divide the neurons, and the problem can be solved only experimentally. For solving these clustering problems, Artificial Intelligence and artificial neural networks are involved [23]. Several alternatives for pattern analysis and recognition exist [24], but Artificial Intelligence and Artificial Neural Networks provide very effective and useful algorithms for pattern recognition [25].

3 EXPERIMENTAL METHODS AND MEASUREMENTS

The experimental brain-computer interface setup for communication with smart devices was worked out at the University of Telecommunications and Post in Sofia. The experimental BCI system includes a Panasonic HDC-Z0000 3D camera, a Spectrum DX9 DSMX sender, a GoPro HERO3, a Nikon D902D, Samsung UE-65HU8500 and LG 60LA620S smart TVs, an ACER K11 LED projector, a Linksys EA6900 AC1900 smart router, a Pololu Zumo Shield, and an 8-core/32 GB RAM/4 TB HDD/3 GB VGA computer for video processing that translates the EEG signals into computer commands.

Two Electro-Caps (elastic electrode caps) were used in the experimental setup to record from positions C3, C4, P3, P4, O1, and O2, defined by the most popular "10-20 System of electrode placement". This implemented standard system of electrode placement defines a grid of the most appropriate places for brain signal measurements on the human head.
Table 1. Classification accuracies with Bayesian Network classifiers for five mental tasks

N    Base     Letter   Math     Count    Rotate
1.   91.3%    65.3%    73.7%    69.4%    72.5%
2.   92.4%    74.8%    76.5%    79.9%    69.2%
3.   87.8%    79.2%    62.6%    78.6%    81.1%
4.   90.2%    70.3%    69.2%    69.4%    75.3%
5.   93.7%    63.5%    75.3%    76.4%    66.1%
6.   89.3%    69.7%    73.7%    75.9%    79.5%
M    90.8%    70.46%   71.83%   74.93%   73.9%

In this investigation, 18-fold cross-validation was used. The classification accuracies with the pair-wise classifier for the same five mental tasks are shown in Table 2.

Table 2. Classification accuracies with pair-wise classifier for five mental tasks

N    Base     Letter   Math     Count    Rotate
1    93.5%    67.1%    76.9%    69.3%    74.5%
2    92.4%    71.6%    78.3%    79.9%    63.2%
3    87.8%    70.8%    64.6%    78.6%    81.1%
4    90.2%    69.4%    63.8%    62.4%    79.3%
5    93.7%    67.9%    75.1%    72.5%    65.8%
6    89.3%    61.7%    82.3%    71.6%    73.3%
M    90.8%    68.08%   73.5%    72.38%   72.9%

The classification accuracies, depending on the user, fall into the interval between 62.6% and 81.1%.

5 EEG EMOTION RECOGNITION

The developed task-oriented BCI includes EEG emotion estimation combined with facial emotion analysis. The technique for EEG analysis and feature extraction includes the following steps: the first step is preprocessing of the EEG signals, the second step builds an augmented feature vector from statistical and time-frequency features, and the third step is classification, which consists of an SVM core with probabilistic outputs.

To improve accuracy with respect to the influence of facial muscle movement on EEG emotion recognition performance, the following assumptions were made: the muscle movement is an additive noise whose influence can be reduced by filtering, and the accuracy of the modality with the better performing classifier can be used as a reliable output for the combined system.

Prior to feature extraction, the baseline drift in the EEG signals has to be removed. For this purpose, mathematical morphology was used to obtain a filtered EEG signal x according to:

x = x' - \frac{(x' \circ SE) \bullet SE + (x' \bullet SE) \circ SE}{2}    (1)

where x' is the raw input signal, SE is a structuring element of type horizontal line with length l = 500, \circ denotes morphological opening, and \bullet denotes morphological closing. The length l meets the condition:

l \geq \frac{f_S}{2 f_b}    (2)

where f_b = 0.1 Hz is the lowest frequency in the spectrum of EEG signals and f_S = 100 Hz is the sampling frequency (indeed, 100 / (2 · 0.1) = 500).

The signal is then decomposed into separate components h_k[n], each with a defined instantaneous frequency:

x[n] = \sum_{k=1}^{K} h_k[n] + r[n]    (3)

where K is the number of intrinsic mode functions (IMFs) and r[n] is the residual. The decomposition is an iterative procedure, and it repeats until the residual becomes a monotonic function. Since the IMFs of an EEG signal represent the activity of the well-known waves (delta, theta, alpha and beta), we have
restricted the available IMFs to 4, plus the residual. For each kth IMF, its Hilbert transform H_k[n] and the instantaneous frequency were calculated:

f_k[n] = \frac{f_S}{2\pi} \left( \phi_k[n+1] - \phi_k[n] \right)    (4)

where \phi_k[n] is the instantaneous phase of the kth IMF. The instantaneous frequency determined according to (4) appeared to be unstable, so for the final experiments a Savitzky–Golay filter with polynomial order 4 was used for the f_k[n] calculation. The window length for the computation was chosen to be 41. The IMFs are not perfect analytic signals, and we encountered the problem of negative values in f_k[n]. An IMF can be regarded as a combination of amplitude and narrow-band frequency modulation; suppressing the amplitude-modulation part significantly reduces the appearance of negative frequencies. In a fixed number of iterations we normalize the IMF by its Hilbert envelope, and finally the amount of negative frequencies is negligible. For a particular frequency band, we extract a signal m[n] as follows:

m[n] = \begin{cases} \max_k H_k[n], & f_l \le f_k[n] \le f_h \\ 0, & \text{otherwise} \end{cases}    (5)

where f_l and f_h are the lowest and the highest frequency in the band. The relative duration of a given EEG wave activity is found according to:

d = \frac{1}{N} \sum_{n=1}^{N} m[n]    (6)

A threshold is used to avoid possible artifacts in the signal.

The proposed model relies on a new approach for multi-view facial expression recognition, in which appearance-based facial features are encoded using sparse codes and projections from non-frontal to frontal views are learned using linear regression. We then reconstruct facial features from the projected sparse codes using a common global dictionary. Finally, the reconstructed features are used for facial expression recognition.

For detection of the frontal face and improved accuracy of face detection, a sub-window with the detected face was processed through a Convolutional Neural Network. In the process of face extraction it is necessary to detect and recognize a few face samples within the proposed detection interval. Afterwards, an average feature vector is calculated and evaluated by the classifier for each window.

With the help of common spatial patterns (CSP) and a linear SVM (support vector machine), two emotional states, sad and happy, were classified. The obtained results show that the Gamma band (30-100 Hz) is the most appropriate for EEG-based emotion classification.

After learning with a supervised learning model and training for 6,000 epochs, the classification process can be used in real time, where all samples are divided into two categories. The achieved classification accuracy is 70.78%.

6 CONCLUSIONS
… Recognition and Artificial Intelligence, vol. 20, Oct., pp. 1-13.

[18] Mu Li and Bao-Liang Lu, "Emotion Classification Based on Gamma-band EEG," Proceedings of the IEEE Conf. on Engineering in Medicine and Biology, Minneapolis, 3-6 Sept. 2009, pp. 1223-1226.
[19] Noppadon Jatupaiboon, Setha Panngum, and Pasin Israsena, "Real-Time EEG-Based Happiness Detection System," Scientific World Journal, vol. 2013, Article ID 618649, 12 pages, 2013. http://dx.doi.org/10.1155/2013/618649
[20] Wei-Long Zheng, Bo-Nan Dong, and Bao-Liang Lu, "Multimodal Emotion Recognition using EEG and Eye Tracking Data," Proc. of the IEEE EMBC Conf., Chicago, 26-30 Aug. 2014, pp. 5040-5043.
[21] M. Schreuder, A. Riccio, M. Risetti, S. Dähne, A. Ramsay, J. Williamson, D. Mattia, and M. Tangermann, "User-centered design in brain-computer interfaces: a case study," Artificial Intelligence in Medicine, vol. 59, pp. 71-80, 2013.
[22] J. J. Shih, D. J. Krusienski, and J. R. Wolpaw, "Brain-Computer Interfaces in Medicine," Mayo Clinic Proceedings, vol. 87, pp. 268-279, 2012.
[23] J. Wolpaw and E. W. Wolpaw, Brain-Computer Interfaces: Principles and Practice, Oxford University Press, 2012.
[24] S. Kumar and M. Sharma, "BCI: Next Generation for HCI," Int. Journal of Advanced Research in Computer Science and Software Engineering, vol. 2, no. 3, Mar. 2012, pp. 146-151.
[25] J. Millán, "A local neural classifier for the recognition of EEG patterns associated to mental tasks," IEEE Transactions on Neural Networks, vol. 13, 2002, pp. 222-226.
[26] P. Poolman, R. M. Frank, P. Luu, S. Pederson, and D. M. Tucker, "A single-trial analytic framework for EEG analysis and its application to target detection and classification," NeuroImage, vol. 42, pp. 787-798, 2008.
International Journal of New Computer Architectures and their Applications (IJNCAA) 6(1): 16-22
The Society of Digital Information and Wireless Communications, 2016 ISSN 2220-9085 (Online); ISSN 2412-3587 (Print)
Abstract – Many blind people suffer in their lives because of their vision loss. Vision is one of the five important senses in the human body, and people with vision loss have their own disability. Many countries around the world provide special assistance to these people in need to make their quality of life as good as possible. They provide them with special equipment for their disability to improve their daily life, like voice message services, electronic sticks that guide them while moving around, and other specialized equipment. This paper presents a project idea to establish and provide ultrasonic gloves to blind people, guiding them to the right roads without the need for other people's assistance. This can be done through ultrasound waves that are sent into the surroundings and then collected by a detector in the gloves, then converted into vibration or audio signals for the blind person, so that they can be aware of their surroundings and choose their own road and way without other people's assistance.

Keywords: Ultrasonic sensor, Pit sensor, Global Positioning System (GPS), Programmable Interface Controllers (PIC) and Lilypad Arduino.

1. INTRODUCTION

Suffering from blindness is not temporary, for a certain time; it is being blind the whole day, the whole time, every second and every minute. Once the blind person wakes up from his bed in the morning, his suffering and his daily needs start. Blind people need more care to avoid the risk of injuries, and that affects the people around them: someone needs to be near them to keep them from being injured. The people around them become exhausted from paying attention to them and giving them all that they need. So blind people must depend on themselves; this paper proposes gloves which help blind people to depend on themselves.

Blindness can be caused by physiological, anatomical or neurological dysfunctions. Each blind person has his own blindness type, and many scores have been established to assess the extent of blindness [1]. Total blindness is the complete absence of vision in a person, mainly neuronal blindness because of a defect in the NLP.

Some researchers believe that blindness may positively affect the audio capability of the blind person, so their sense of hearing becomes more accurate and they can distinguish sounds and musical tones. Scientists define people with a small percentage of blindness as those who have a score of 20/200 in the better eye, as well as those who use eyeglasses, have shortness of vision, or have a decrease in the visual field to less than 20 degrees. Therefore, many countries are supporting the development of research and studies seeking to improve the lives of blind people with special needs, to make them feel they are part of the surrounding community through associations and different courses given to teach the mechanisms of dealing with blindness and to overcome the inability to move around [1].

2. PROBLEM STATEMENT

Around the world, blind people face real problems. Their major problem lies in their sense of disability and in perceiving what is around them. They also face the problem that other people hesitate to offer service or assistance to them. In addition, many
people do not know how to deal with a blind person. Blind people need a boost of confidence in themselves and in their ability to achieve anything and rely on themselves without needing others [3].

Many countries have developed mechanisms and services that contribute to making life easier for the blind: they have developed rules and laws on the art of dealing with the blind without drawing attention to the disability, and some companies and organizations are training and recruiting people with the ability to deal with the blind. In addition, a lot of companies have developed special equipment used by the blind to help them [2].

There are a lot of emotions a blind person hides behind his eyelids that we cannot see. Many of them feel embarrassed to talk about their suffering in front of others, or feel upset at their inability to see because of illness, old age, or the events that caused them to lose their sight. Some diseases that cause weakness in sight are listed (Fig. 1), according to the World Health Organization in 2012, as the most common causes of blindness (except for refractive errors):
1. Cataracts (47.9%)
2. Glaucoma (12.3%)
3. Macular degeneration (8.7%)
4. Corneal opacities (5.1%)
5. Diabetic retinopathy (4.8%)
6. Childhood blindness (3.9%)
7. Trachoma (3.6%)
8. Onchocerciasis (0.8%) [4]
He had to live with his new environment, and he was horrified by the tanker trucks steaming past around his home. After carrying a white cane, the tanker drivers paid attention to him, especially at night, and were keen to help him. The cane became more noticeable to them, and the difference was clearly felt. Here the white cane idea was started [1].

In America in 1930, George Lyon developed the design, making the lower end of the stick red to be distinctive. In 1931, d'Herbemont's way of walking with the stick was taught and learned in France. This event was then reported in British newspapers, which helped to spread the idea. In May 1931, BBC radio broadcast a white cane code for the whole world, to be used by all blind people. After that, most blind rehabilitation centers embraced the idea and taught many blind people around the world how to walk with it [1].

4. RELATED WORKS

Chun-Ming Tsai explains an interactive shopping card. The idea of the interactive card (Love Card) is to integrate a radio-frequency identification (RFID) chip into an electronic card attached to the neck of the blind person; the card then reacts with existing products in the market and pronounces the name of a product and its price to the blind user [5].

Hyun Ju Lee, Mun Suk Choi and Chung Sei Rhee consider that ATM cards, whether credit or other, are unusable by the blind category. The proposed solution is a credit card with prominent Braille points and a built-in audio system, so that when the blind person makes a purchase and passes the card to pay, the withdrawn amount is presented in Braille and pronounced aloud [6].

Wang Guolu, Qiu Kaijin, Xu Hai and Chen Yao explained a phone screen for the blind: blind users interact with mobile phones through a dedicated Braille touch panel called (B-Touch). This phone contains advanced capabilities, such as a direct book-reader system, a system to identify objects, and a touch screen on which the displayed content is formed in Braille [7].

Nguyen, Kennedy, and Cornwall demonstrate that audio color recognition is one of the problems facing the blind, specifically in identifying the colors of objects. Touch alone is not enough to determine the color of a body, so the proposed technical solution is a device (Bright-F) which identifies the color and depth and then translates them into spoken words [8].

Balachandran, Cecelja and Ptasinski explained GPS for the blind. Perhaps the best technical development in the field of helping the blind was the launch of GPS navigation devices, which can locate the blind user, notify him acoustically when close to an intersection, and tell him about restaurants or nearby bus stops. However, this technique cannot reach high accuracy in determining the user's location, which is important for the blind: it only places the user within about 30-40 meters [9].

Chaurasia and Kavitha's paper was about an electronic stick. It is an electronic rod designed as a long white stick that gives the blind person ultrasound-driven feedback, felt under his hand, when he hits an obstacle in a certain direction. It is able to detect obstacles in all directions at a distance of up to five meters. The ends of this stick are made of lead material [10].

5. HARDWARE REQUIREMENTS

Any system, application or device requires suitable available equipment to work efficiently and properly, so this system requires carefully selected equipment, taking into consideration that every part provided to the system must be suitable for the service, so that it is useful and effective for the intended category of users [6].

The system contains several tools, including an ultrasonic sensor which works as the input data source, a vibration motor which works as the output reaction, a microcontroller called the Lilypad Arduino board which is programmed in the C++ language, fabric gloves worn by the user, and conductive fibers which connect all the hardware parts together [6].
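Typical hobbyist ultrasonic range modules report distance as the width of an echo pulse, and converting that width to centimetres is a one-line calculation based on the speed of sound. The sketch below is a generic illustration of how such a sensor's reading is interpreted, not the project's actual firmware (which, per the paper, runs in C++ on the Lilypad Arduino).

```python
# Sound travels ~343 m/s at room temperature, and the echo covers the
# distance twice (out and back), so each centimetre of range costs
# roughly 58.3 microseconds of round-trip time.
US_PER_CM = 58.3

def echo_to_distance_cm(echo_us):
    """Convert a measured echo pulse width (microseconds) to range in cm."""
    if echo_us < 0:
        raise ValueError("pulse width must be non-negative")
    return echo_us / US_PER_CM
```

A pulse of about 583 µs therefore corresponds to an obstacle roughly 10 cm away, well inside the glove's "hard vibration" zone.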
6. IMPLEMENTATION
As shown in Figure 8, the proposed device is divided into two main parts:
- Ultrasonic sensor (input device)
- Vibration motor (output device)

The ultrasonic sensor sends input data about the distance of surrounding objects to a Lilypad Arduino, which processes this data and then commands the vibration motors to give output in the form of vibration, in accordance with the programming of the Lilypad Arduino. As shown in Figure 8, the Lilypad Arduino orders the vibration motor to alarm with continuous vibration when the object is close to the user. When the object is a little farther from the user, the Lilypad Arduino orders the vibration motor to alarm with intermittent vibration. When the object is far from the user, the Lilypad Arduino orders the vibration motor to give no vibration. The ultrasonic sensor then goes back to sending waves to find objects, and the process repeats [9].

7. RESULTS

… properly. As mentioned earlier, the sensor gives information about surrounding objects and the environment around the user, alerting through vibration, with an accuracy of about 1 meter within the user's real surrounding area. The overall results of the glove experiment came out as follows:
i. The highest response occurred the closer the glove was to the object.
ii. A middle response was found at a middle distance from the object.
iii. No response was found at a far distance.

Table 1 contains the numerical details of the experiments on the gloves; the results are given in cm:

Table 1. Test Results of the Gloves

Obstacle                 Test
From 0 to 30 cm          Hard vibration
From 30 to 60 cm         Intermittent vibration
From 60 to 100 cm        Intermittent vibration
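The mapping in Table 1 from obstacle distance to vibration pattern can be expressed as a small lookup function. This is a minimal sketch assuming the thresholds above; the pattern names are illustrative labels, not the firmware's actual constants.

```python
def vibration_pattern(distance_cm):
    """Map a measured obstacle distance to a vibration mode (Table 1)."""
    if distance_cm < 0:
        raise ValueError("distance must be non-negative")
    if distance_cm <= 30:
        return "hard"          # continuous vibration: obstacle very close
    if distance_cm <= 100:
        return "intermittent"  # obstacle at middle range
    return "off"               # beyond 1 m: no response observed
```

In the actual gloves this decision would run inside the Lilypad Arduino's main loop, driving the vibration motor after each ultrasonic reading.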
… instructions about places and surrounding objects. The blind person will be able to move from side to side and from one place to another without needing the help of others to learn about obstacles and surrounding objects. This system would be one of the more sophisticated and assistive setups for the visually impaired around the world, thanks to its high efficiency, its low manufacturing cost, and the accuracy of the sensor. In spite of this, it is recommended to rebuild the added pieces to commercial quality so that they work properly. In the future, some improvements will be made to the system in order to meet users' approval, such as:
- Adding other types of sensors, such as pulse and temperature sensors.
- Adding a navigation system and an internal memory to store all the places and coordinates visited by the user.
- Adding automatic synchronization between the gloves and mobile phones, so that the user's location can be identified by relatives or others.
- Adding a voice guidance feature to make it easier to stay away from risk if the vibration proves unavailing.

REFERENCES

[5] Chun-Ming Tsai, Machine Learning and Cybernetics (ICMLC), 2012 International Conference on, vol. 5, DOI: 10.1109/ICMLC.2012.6359666, 2012, pp. 1901-1906.
[6] Hyun Ju Lee, Mun Suk Choi, and Chung Sei Rhee, "Traceability of double spending in secure electronic cash system," in Computer Networks and Mobile Computing, 2003. ICCNMC 2003. International Conference on, pp. 330-333. IEEE, 2003.
[7] Wang Guolu, Qiu Kaijin, and Chen Yao, "The design and implementation of a gravity sensor-based mobile phone for the blind," in Software Engineering and Service Science (ICSESS), 2013 4th IEEE International Conference on, pp. 570-574. IEEE, 2013.
[8] D. Thong Nguyen, Hamish Kennedy, and Jeff Cornwall, "A vocalized color recognition system for the blind," IEEE Transactions on Instrumentation and Measurement, vol. 33, no. 2, 1984, pp. 122-126.
[9] Wamadeva Balachandran, Franjo Cecelja, and Piotr Ptasinski, "A GPS based navigation aid for the blind," in Applied Electromagnetics and Communications, 2003. ICECom 2003. 17th International Conference on, pp. 34-36. IEEE, 2003.
[10] Shashank Chaurasia and K. V. N. Kavitha, "An electronic walking stick for blinds," in Information Communication and Embedded Systems (ICICES), 2014 International Conference on, pp. 1-5. IEEE, 2014.
International Journal of New Computer Architectures and their Applications (IJNCAA) 6(1): 23-33
The Society of Digital Information and Wireless Communications, 2016 ISSN 2220-9085 (Online); ISSN 2412-3587 (Print)
ABSTRACT

Smartphones and tablets are becoming dominant devices in the present market. Apart from offering the same features as traditional mobile phones, smart devices provide several other features as well. Designing smartphones to balance the tradeoff between minimum power dissipation and increased performance becomes a challenge. This is due to the fact that users try to optimize for mobility and lightness rather than for maximum computing performance and throughput. In an effort to study the performance of smart devices, we modeled the components of smart devices using an M/M/1//N closed queue system. In this model, the number of requests in the system is fixed. The performance of requests under the closed and open system models is compared in terms of average response time. We observe that average response time generally increases with increase in arrival rate and increase in load values. It is further observed that requests experience lower average response time under the open system model at low arrival rates, low service times, and low load compared to the closed system model. However, at high arrival rates, high service rates, and high load values, requests experience lower response time under the closed system model than under the open system model. The analytical results indicate that closed system models offer better performance at high arrival rates and high load values, whereas open system models offer better performance at low arrival rates and low load values.

KEYWORDS

Closed system model, open system model, smart devices, embedded systems, workload generators

1 INTRODUCTION

Despite the market's heterogeneity, smart devices, wireless broadband, and network-based cloud computing constitute a perfect storm of opportunity for application developers, luring their attention towards the new platforms [1]. Smart devices are becoming dominant devices in the present market. Apart from offering the same features as traditional devices, smart devices provide several other features. These features include, amongst others, high-speed Internet access using Wi-Fi and mobile data networks, running several applications and games, capturing and sharing pictures on social platforms, and audio-video capturing and sharing [2].
The performance limitation of smart devices originates mainly from the fact that they are embedded systems [3]. An embedded system is a computer system made for specific control functions inside a larger system, frequently having real-time computing constraints.
Normally, embedded systems have different design constraints than desktop computing applications. Embedded systems instead try to optimize for mobility and lightness rather than for maximum computing performance and throughput. Since many users want their smartphones to be more powerful and have high throughput, while at the same time being small and light, this presents a constraint to designers [4]. This constraint acts as a recipe for analyzing the factors that affect system performance, for example, the number of processor cores, clock frequency, processor type, load, service rate, arrival rate, and so on.
System researchers are aware of the importance of representing a system accurately. Representing systems accurately involves many things, including accurately representing the scheduling of requests, service distributions, the arrival of requests into systems, correlation between requests, etc. One factor that researchers pay little attention to is whether the job arrivals obey a closed or an open system model.
In a closed system model, new job arrivals are only triggered by job completions [3]. In addition, in a closed system model it is assumed that there is some fixed number of users who use the system forever. These users are called the multiprogramming level (MPL) [3, 5, 6]. Examples of closed systems include TPC-W [8], used as a database benchmark for e-commerce, RUBiS [9], an auction website benchmark, AuthMark [10], a Web authentication and authorization benchmark, etc.
On the other hand, in an open system model, new jobs arrive independently of job completions [11, 12]. In an open system model there is a stream of arriving users, and each user is assumed to submit one job to the system, wait to receive the response, and then leave. The differentiating feature of an open system is that a request completion does not trigger new requests; instead, a new request is only triggered by a new user arriving.
The differences between open and closed system models motivate the need for researchers to investigate the performance of devices in either open or closed systems. This background therefore provides a simple recipe for investigating the performance of smart devices in closed system models.

2 RELATED WORK

Schroeder et al. [3] studied the performance of closed versus open workload generators under many applications, like static and dynamic web servers, database servers, auctioning sites, and supercomputing centers. The study noted that there was a huge difference between the two systems. For example, under a fixed load, the mean response time for an open system model can exceed that for a closed system model by a high magnitude. In addition, while scheduling to favour short jobs is extremely effective in reducing response times in an open system, it has very little effect in a closed system model. The deduction is that variability in the job sizes has a higher effect in an open system than in a closed one.
Bondi et al. [14] studied a general network of FCFS queues and concluded that service variability is more dominant in open systems and less pronounced in closed systems (provided the MPL is not too large). However, this study was primarily restricted to FCFS queues.
In a similar study, Schatte et al. [15, 16] studied a single FCFS queue in a closed loop with think time. The study revealed that, as the MPL grows to infinity, the closed system converges monotonically to an open system. This result provides a fundamental understanding of the effect of the MPL parameter.
M. Choi et al. [1] developed a queuing model to analyze the performance of the components of smart devices in terms of average waiting time and mean queue length using an open system model. The components analyzed include the Central Processing Unit (CPU), RAM, GSM, LCD, and Graphics. The authors assumed an open queuing system model. Although many applications can be modeled using an open system model, open system models do not capture interactivity in systems, which is well captured by closed system models. For example, interactive behaviors such as human users interacting with a system, threads contending for a lock, and networked servers waiting for a response message are very common. In a closed network, it is possible to model a set of users submitting requests to a system, waiting for results, and
then submitting more requests.
The expressions for the average response time and the number of requests in the system, respectively, are derived in [1] and given below.
The average response time, E[S], is given as:

E[S] = \frac{1}{\mu(1-\rho)} = \frac{\rho}{\lambda(1-\rho)}    (1)

The average number of requests in the system, E[X], is given as:

E[X] = \frac{\rho}{1-\rho} = \frac{\lambda}{\mu-\lambda}    (2)

In a closed system model, new job arrivals are only triggered by job completions.
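As a quick numerical sanity check on equations (1) and (2), the sketch below verifies that the two forms of each expression agree and match the familiar M/M/1 results; the rate values are illustrative assumptions, not figures taken from the paper.

```python
# Illustrative parameter values (assumptions, not taken from the paper).
lam, mu = 1200.0, 1500.0      # arrival and service rates (requests/second)
rho = lam / mu                # load (utilization), must be < 1 for stability

# Equation (1): the two forms of the average response time agree,
# and both equal the familiar M/M/1 result 1 / (mu - lam).
E_S = 1.0 / (mu * (1.0 - rho))
assert abs(E_S - rho / (lam * (1.0 - rho))) < 1e-12
assert abs(E_S - 1.0 / (mu - lam)) < 1e-12

# Equation (2): average number of requests in the system,
# consistent with Little's law E[X] = lam * E[S].
E_X = rho / (1.0 - rho)
assert abs(E_X - lam / (mu - lam)) < 1e-12
assert abs(E_X - lam * E_S) < 1e-12

print(E_S, E_X)   # response time in seconds, mean number in system
```

With these rates the load is 0.8, so the open system already queues noticeably: four requests are in the system on average.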
length.

\sum_{j=0}^{N} \frac{N!}{(N-j)!}\,\rho^{j}\,\pi_{0} = 1    (9)

and hence,

\pi_{0} = \left[ \sum_{j=0}^{N} \frac{N!}{(N-j)!}\,\rho^{j} \right]^{-1}    (10)

Figure 2. M/M/1//N closed queue system.

Using Little's law [13], the average queue length, E[X], is given by:
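Equation (10) is straightforward to evaluate directly; the sketch below (with illustrative parameter values, not values from the paper) also checks that the resulting state probabilities \pi_j = \frac{N!}{(N-j)!}\rho^j\pi_0 sum to one, as equation (9) requires.

```python
from math import factorial

def pi_0(N, rho):
    """Probability of an empty M/M/1//N system, per equation (10)."""
    return 1.0 / sum(factorial(N) // factorial(N - j) * rho ** j
                     for j in range(N + 1))

# Illustrative values (assumptions): N = 5 requests, lam = 1000/s, mu = 1500/s.
N, lam, mu = 5, 1000.0, 1500.0
rho = lam / mu
p0 = pi_0(N, rho)

# The state probabilities pi_j = N!/(N-j)! * rho^j * pi_0 must sum to 1,
# which is exactly the normalization stated in equation (9).
probs = [factorial(N) // factorial(N - j) * rho ** j * p0 for j in range(N + 1)]
assert abs(sum(probs) - 1.0) < 1e-12

print(p0)
```

Note that even at a load of 2/3 the closed system with N = 5 is almost never empty, which is why its throughput \mu(1-\pi_0) stays close to the service rate.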
E[S] = \frac{E[X]}{\mu(1-\pi_{0})}    (12)

E[S] = \frac{N - \mu(1-\pi_{0})\cdot\frac{1}{\lambda}}{\mu(1-\pi_{0})} = \frac{N}{\mu(1-\pi_{0})} - \frac{1}{\lambda}    (13)

where \pi_{0} is as given in equation 10.

Figure 6. Response time against service rate.

Figure 6 shows the graph of response time against service rate for different arrival rates. We observe from the graph that response time decreases with increase in service rate; in other words, as the service rate increases, the response time decreases. An increase in service rate implies an increase in the number of requests served per unit time, and hence a decrease in average response time. We also observe that requests experience lower response time under the open system at low service rates; however, at high service rates, requests experience lower response time under the closed system model.

Figure 7. Average queue length against arrival rate.

Figure 7 shows the variation of average queue length against the arrival rate of requests into the system. We observe that average queue length increases with increase in arrival rate regardless of the service rate. We further observe that the number of requests in the system under the open system model is lower than under the closed system model at lower arrival rates. However, at higher arrival rates, the number of requests in the system under the open system model is higher than under the closed system model. Whereas the performance of requests under open and closed systems is the same at an arrival rate of 1100 requests/second when the service rate is 1500 requests/second, it is the same at an arrival rate of 1300 requests/second when the service rate is 1800 requests/second.
Figure 8 shows the graph of average queue length in the system against the load. We observe that the number of requests in
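The crossover behaviour described for Figures 6 and 7 can be reproduced directly from equations (1), (10), and (13); the parameter values below (service rate 1500 requests/s, N = 5) are illustrative assumptions in the spirit of the experiments, not the paper's exact configurations.

```python
from math import factorial

def pi_0(N, rho):
    """Empty-system probability for M/M/1//N, equation (10)."""
    return 1.0 / sum(factorial(N) // factorial(N - j) * rho ** j
                     for j in range(N + 1))

def closed_E_S(N, lam, mu):
    """Closed-system average response time, equation (13)."""
    p0 = pi_0(N, lam / mu)
    return N / (mu * (1.0 - p0)) - 1.0 / lam

def open_E_S(lam, mu):
    """Open-system (M/M/1) average response time, equation (1)."""
    return 1.0 / (mu - lam)

mu, N = 1500.0, 5
for lam in (1000.0, 1100.0, 1200.0, 1300.0, 1400.0):
    print(lam, open_E_S(lam, mu), closed_E_S(N, lam, mu))
```

At low arrival rates the open-system response time is lower, while near saturation the closed system's fixed population caps the queue at N and the open-system M/M/1 time grows without bound, matching the trend reported above.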
… and load while varying the number of requests in the system.

[Figure: response time (seconds) against load, ρ, for arrival rates of 1000/second and 1100/second, with 5 and 6 requests in the system.]

We observe from figure 13 that average queue length generally increases with increase in …

5 CASE STUDY FROM FRANCE FOOTBALL WORLD CUP 1998 TRACE

We used trace data from Internet sites serving the 1998 World Cup. The data used are available from the Internet Traffic Archive (see

… service rate of 1500 requests/s. Figure 15 …
Figure 16. Response time against arrival rate.

Figure 17. Response time against arrival rate.

… response time against the arrival rate of requests into the system under heavy load. We observe that average response time generally increases with increase in arrival rate. At lower arrival rates, requests experience lower response time under open systems; however, at higher arrival rates, requests experience lower response time under closed systems. It is further observed that the performance of requests under open and closed systems is the same when the arrival rate is approximately 6500 requests/s; beyond an arrival rate of 6500 requests/s, requests perform better under closed systems.

… queue length, which generally increases with increase in arrival rate and with increase in load values. The average queue length, however, decreases with increase in the service rate. It is further observed that requests experience lower average queue length under the open system model at low arrival rates, low service rates, and low load. However, at high arrival rates, high service rates, and high load values, requests experience lower average queue length under the closed system model. Therefore, closed system models are observed to offer better performance at high arrival rates and high load values than open system models.
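As an independent check on equation (13), a small discrete-event simulation of the closed M/M/1//N system (each of the N users alternates between an exponential think time at rate λ and FCFS service at rate µ) can be compared against the analytical response time. This is a sketch under assumed parameter values, not the authors' experimental setup.

```python
import heapq
import random
from math import factorial

def closed_E_S(N, lam, mu):
    """Analytical response time for M/M/1//N, equations (10) and (13)."""
    p0 = 1.0 / sum(factorial(N) // factorial(N - j) * (lam / mu) ** j
                   for j in range(N + 1))
    return N / (mu * (1.0 - p0)) - 1.0 / lam

def simulate_closed(N, lam, mu, n_jobs=200_000, seed=1):
    """Simulate N users cycling between exponential think time (rate lam)
    and a single FCFS server with exponential service (rate mu)."""
    rng = random.Random(seed)
    # Event tuples: (time, kind, user); kind 0 = job arrives, 1 = job completes.
    events = [(rng.expovariate(lam), 0, u) for u in range(N)]
    heapq.heapify(events)
    waiting = []            # (arrival time, user) of queued jobs, FIFO order
    in_service = None       # (arrival time, user) of the job being served
    total_resp, done = 0.0, 0
    while done < n_jobs:
        t, kind, u = heapq.heappop(events)
        if kind == 0:                       # a user's request reaches the queue
            if in_service is None:
                in_service = (t, u)
                heapq.heappush(events, (t + rng.expovariate(mu), 1, u))
            else:
                waiting.append((t, u))
        else:                               # service completion (single server)
            arrived, user = in_service
            total_resp += t - arrived       # response = wait + service
            done += 1
            # The finished user thinks, then submits a new request.
            heapq.heappush(events, (t + rng.expovariate(lam), 0, user))
            if waiting:
                in_service = waiting.pop(0)
                heapq.heappush(events, (t + rng.expovariate(mu), 1, in_service[1]))
            else:
                in_service = None
    return total_resp / done

# Assumed illustrative values: N = 5 users, lam = 1000/s, mu = 1500/s.
N, lam, mu = 5, 1000.0, 1500.0
sim = simulate_closed(N, lam, mu)
ana = closed_E_S(N, lam, mu)
print(sim, ana)   # the two should agree to within sampling noise
```

The simulated mean response time should match the value from equation (13) to within a fraction of a percent at this run length, which supports the analytical model used throughout the results above.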
The International Journal of New Computer Architectures and Their Applications aims to provide a
forum for scientists, engineers, and practitioners to present their latest research results, ideas,
developments and applications in the field of computer architectures, information technology, and
mobile technologies. The IJNCAA is published four times a year and accepts three types of papers as
follows:
1. Research papers: papers presenting and discussing the latest and most profound
research results in the scope of IJNCAA. Papers should describe new contributions in the
scope of IJNCAA and support claims of novelty with citations to the relevant literature.
2. Technical papers: papers establishing a meaningful forum between practitioners and
researchers by presenting useful solutions in various fields of digital security and forensics. This includes
all kinds of practical applications, covering principles, projects, missions, techniques,
tools, methods, processes, etc.
3. Review papers: papers critically analyzing past and current research trends in the field.
Manuscripts submitted to IJNCAA should not have been previously published or be under review by any other
publication. Plagiarism is a serious academic offense and will not be tolerated in any form. Any case of
plagiarism will lead to a lifetime ban of all authors from publishing in any of our journals or
conferences.
Original unpublished manuscripts are solicited in the following areas including but not limited to:
Computer Architectures
Parallel and Distributed Systems
Storage Management
Microprocessors and Microsystems
Communications Management
Reliability
VLSI