DATA MINING
DM16NXT01 TITLE: Cost Minimization for Rule Caching in Software Defined Networking
ABSTRACT: Software-defined networking (SDN) is an emerging network
paradigm that simplifies network management by decoupling the control plane
and data plane, such that switches become simple data forwarding devices and
network management is controlled by logically centralized servers. In SDN-
enabled networks, network flow is managed by a set of associated rules that are
maintained by switches in their local Ternary Content Addressable Memories
(TCAMs) which support high-speed parallel lookup on wildcard patterns. Since
TCAM is expensive hardware and extremely power-hungry, each switch has
only limited TCAM space, and it is inefficient and even infeasible to maintain all
rules at local switches. On the other hand, if we eliminate TCAM occupation by
forwarding all packets to the centralized controller for processing, it results in a
long delay and heavy processing burden on the controller. In this paper, we
strive for the fine balance between rule caching and remote packet processing
by formulating a minimum weighted flow provisioning ( MWFP) problem with an
objective of minimizing the total cost of TCAM occupation and remote packet
processing. We propose an efficient offline algorithm for the case where the
network traffic is given in advance; otherwise, we propose two online algorithms
with guaranteed competitive ratios.
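The rent-or-buy tradeoff behind such online rule caching can be illustrated with the classic break-even heuristic (a minimal sketch of the general idea, not the paper's actual algorithm; the cost values are hypothetical):

```python
def online_rule_cache(packet_costs, cache_cost):
    """Break-even heuristic for one flow: forward packets to the
    controller (paying a remote-processing cost each time) until the
    cumulative remote cost reaches the TCAM caching cost, then cache
    the rule. Returns (total_cost, index_at_which_rule_was_cached)."""
    spent = 0.0
    for i, c in enumerate(packet_costs):
        if spent + c >= cache_cost:
            # Caching now is no worse than continuing to pay remotely.
            return spent + cache_cost, i
        spent += c
    return spent, None  # flow ended before caching would have paid off

# Short flow: never worth caching the rule in TCAM.
cost_short, when_short = online_rule_cache([1.0] * 3, cache_cost=10.0)
# Long flow: cache once remote spending reaches the TCAM cost.
cost_long, when_long = online_rule_cache([1.0] * 100, cache_cost=10.0)
```

Against an offline optimum that pays min(total remote cost, cache cost), this break-even strategy is the standard 2-competitive "ski-rental" baseline; the paper's algorithms refine this idea with guaranteed competitive ratios.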
DM16NXT02 TITLE: On Binary Decomposition based Privacy-Preserving Aggregation
Schemes in Real-time Monitoring Systems
ABSTRACT: In real-time monitoring systems, fine-grained measurements would
pose great privacy threats to the participants as real-time measurements could
disclose accurate people-centric activities. Differential privacy has been
proposed to formalize and guide the design of privacy-preserving schemes.
Nonetheless, due to the correlations and high fluctuations in time-series data, it
is hard to achieve an effective privacy and utility tradeoff by differential privacy
mechanisms. To address this issue, in this paper, we first propose novel multi-
dimensional decomposition based schemes to compress the noise and enhance
the utility in differential privacy. The key idea is to decompose the
measurements into multi-dimensional records and to achieve differential privacy
in bounded dimensions so that the error caused by unbounded measurements
can be significantly reduced. We then extend our developed scheme and
develop a binary decomposition scheme for privacy-preserving time-series
aggregation in real-time monitoring systems. Through a combination of
extensive theoretical analysis and experiments, we show that our proposed
schemes can effectively improve usability while achieving the same level of
differential privacy as existing schemes.
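The decomposition idea can be sketched in a toy form (our own illustration, not the paper's exact mechanism): a bounded integer measurement is split into binary digits so each dimension has sensitivity 1, and each digit is perturbed with Laplace noise under a split privacy budget.

```python
import math
import random

def binary_decompose(value, bits):
    """Split a non-negative integer measurement into `bits` binary digits
    (least-significant first); each digit has sensitivity 1."""
    return [(value >> i) & 1 for i in range(bits)]

def reconstruct(digits):
    """Weighted sum of (possibly noisy) digits."""
    return sum(d * (1 << i) for i, d in enumerate(digits))

def laplace_noise(scale, rng):
    # Inverse-CDF sample of Laplace(0, scale).
    u = max(rng.random(), 1e-12) - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_release(value, bits, epsilon, rng):
    """Perturb each digit with budget epsilon/bits (sequential
    composition); per-digit sensitivity is 1, so the Laplace scale
    per dimension is bits/epsilon."""
    digits = binary_decompose(value, bits)
    return [d + laplace_noise(bits / epsilon, rng) for d in digits]

rng = random.Random(7)
noisy_digits = private_release(13, bits=4, epsilon=2.0, rng=rng)
```

The benefit the abstract describes comes from noise scales depending on the bounded per-dimension sensitivity rather than on the unbounded measurement range.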
DM16NXT03 TITLE: An OpenMP Extension that Supports Thread-Level Speculation
ABSTRACT: OpenMP directives are the de facto standard for shared-memory
parallel programming. However, OpenMP does not guarantee the correctness of
the parallel execution of a given loop if runtime data dependences arise.
Consequently, many highly-parallel regions cannot be safely parallelized with
OpenMP due to the possibility of a dependence violation. In this paper, we
propose to augment OpenMP capabilities, by adding thread-level speculation
(TLS) support. Our contribution is threefold. First, we have defined a new
speculative clause for variables inside parallel loops. This clause ensures that all
accesses to these variables will be carried out according to sequential semantics.
Second, we have created a new, software-based TLS runtime library to ensure
correctness in the parallel execution of OpenMP loops that include speculative
variables. Third, we have developed a new GCC plugin, which seamlessly
translates our OpenMP speculative clause into calls to our TLS runtime engine.
The result is the ATLaS C Compiler framework, which takes advantage of TLS
techniques to expand OpenMP functionalities, and guarantees the sequential
semantics of any parallelized loop.
DM16NXT04 TITLE: Real-Time Semantic Search Using Approximate Methodology for Large-
Scale Storage Systems
ABSTRACT: The challenges of handling the explosive growth in data volume and
complexity drive an increasing need for semantic queries. Semantic queries can
be interpreted as correlation-aware retrieval that may contain approximate
results. Existing cloud storage systems largely fail to offer an
adequate capability for semantic queries. Since the true value or worth of
data heavily depends on how efficiently semantic search can be carried out on
the data in (near-) real-time, large fractions of data end up with their values
being lost or significantly reduced due to the data staleness. To address this
problem, we propose a near-real-time and cost-effective semantic queries based
methodology, called FAST. The idea behind FAST is to explore and exploit the
semantic correlation within and among datasets via correlation-aware hashing
and manageable flat-structured addressing to significantly reduce the processing
latency, while incurring acceptably small loss of data-search accuracy. The near-
real-time property of FAST enables rapid identification of correlated files and the
significant narrowing of the scope of data to be processed. FAST supports several
types of data analytics, which can be implemented in existing searchable storage
systems. We conduct a real-world use case in which children reported missing in
an extremely crowded environment (e.g., a highly popular scenic spot on a peak
tourist day) are identified in a timely fashion by analyzing 60 million images using
FAST. FAST is further improved by using semantic-aware namespace to provide
dynamic and adaptive namespace management for ultra-large storage systems.
Extensive experimental results demonstrate the efficiency and efficacy of FAST in
the performance improvements.
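FAST's correlation-aware hashing is not specified in this abstract; a generic locality-sensitive SimHash sketch (our illustration, with hypothetical token sets) conveys the idea of mapping semantically correlated files into nearby hash buckets so that only a narrow scope of data needs processing:

```python
import hashlib

def simhash(tokens, bits=32):
    """Locality-sensitive fingerprint: token sets that overlap heavily
    tend to map to fingerprints with small Hamming distance, so
    semantically correlated files land in nearby buckets."""
    acc = [0] * bits
    for tok in tokens:
        h = int(hashlib.md5(tok.encode()).hexdigest(), 16)
        for i in range(bits):
            acc[i] += 1 if (h >> i) & 1 else -1
    return sum((1 << i) for i, v in enumerate(acc) if v > 0)

def hamming(a, b):
    """Number of differing fingerprint bits."""
    return bin(a ^ b).count("1")

# Hypothetical per-file semantic tokens (e.g., from image annotations).
doc_a = ["beach", "children", "crowd", "photo"]
doc_b = ["beach", "children", "crowd", "image"]   # near-duplicate of doc_a
doc_c = ["engine", "piston", "torque", "valve"]   # unrelated file
fp_a, fp_b, fp_c = simhash(doc_a), simhash(doc_b), simhash(doc_c)
```

A flat-structured index keyed on such fingerprints lets a query probe only buckets within a small Hamming radius instead of scanning the full dataset.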
DM16NXT05 TITLE: Incremental and Decremental Max-flow for Online Semi-supervised
Learning
ABSTRACT: In classification, if a small number of instances is added or removed,
incremental and decremental techniques can be applied to quickly update the
model. However, the design of incremental and decremental algorithms involves
many considerations. In this paper, we focus on linear classifiers including
logistic regression and linear SVM because of their simplicity over kernel or other
methods. By applying a warm start strategy, we investigate issues such as using
primal or dual formulation, choosing optimization methods, and creating
practical implementations. Through theoretical analysis and practical
experiments, we conclude that a warm start setting on a high-order optimization
method for primal formulations is more suitable than others for incremental and
decremental learning of linear classification.
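The warm-start idea is simple to sketch: after instances are added, optimization restarts from the previous weights instead of from zero. Below is a minimal logistic-regression illustration (our own toy implementation and data, not the paper's solver; the paper argues for a high-order primal method, whereas this sketch uses plain gradient descent for brevity):

```python
import math

def train_logreg(X, y, w=None, lr=0.1, epochs=200):
    """Batch gradient descent for logistic regression. Passing the
    previous weights `w` gives a warm start after the data changes."""
    n_feat = len(X[0])
    w = list(w) if w is not None else [0.0] * n_feat
    for _ in range(epochs):
        grad = [0.0] * n_feat
        for xi, yi in zip(X, y):
            p = 1.0 / (1.0 + math.exp(-sum(wj * xj for wj, xj in zip(w, xi))))
            for j in range(n_feat):
                grad[j] += (p - yi) * xi[j]
        w = [wj - lr * gj / len(X) for wj, gj in zip(w, grad)]
    return w

def loss(X, y, w):
    """Average logistic loss."""
    total = 0.0
    for xi, yi in zip(X, y):
        p = 1.0 / (1.0 + math.exp(-sum(wj * xj for wj, xj in zip(w, xi))))
        total += -(yi * math.log(p) + (1 - yi) * math.log(1 - p))
    return total / len(X)

X = [[1.0, 0.2], [1.0, 0.9], [1.0, -0.5], [1.0, 1.5]]
y = [0, 1, 0, 1]
w0 = train_logreg(X, y)                         # initial model
X2, y2 = X + [[1.0, 1.2]], y + [1]              # one instance added
w_warm = train_logreg(X2, y2, w=w0, epochs=20)  # cheap incremental update
```

Because the old optimum is close to the new one when only a few instances change, a few warm-started iterations recover most of the accuracy of full retraining.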
DM16NXT06 TITLE: Personalized Influential Topic Search via Social Network Summarization
ABSTRACT: Social networks are a vital mechanism to disseminate information to
friends and colleagues. In this work, we investigate an important problem - the
personalized influential topic search, or PIT-Search in a social network: Given a
keyword query q issued by a user u in a social network, a PIT-Search is to find the
top-k q-related topics that are most influential for the query user u. The
influence of a topic on a query user depends on the social connection between
the query user and the social users containing the topic in the social network. To
measure the topics’ influence at a similar granularity scale, we need to extract
the social summarization of the social network regarding topics. To make
effective topic-aware social summarization, we propose two random-walk based
approaches: random clustering and an L-length random walk. Based on the
proposed approaches, we can find a small set of representative users with
assigned influential scores to simulate the influence of the large number of topic
users in the social network with regards to the topic. The selected representative
users are denoted as the social summarization of topic-aware influence spread
over the social network. We then verify the usefulness of the social
summarization by applying it to the problem of personalized influential topic
search. Finally, we evaluate the performance of our algorithms using real-world
datasets, and show that the approach is efficient and effective in practice.
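The L-length random walk can be made concrete with an exact landing-probability computation (a minimal sketch on a hypothetical three-user network; the paper's actual scoring is not given in this abstract):

```python
def l_length_walk_scores(adj, seed, L):
    """Exact landing distribution of an L-step uniform random walk
    starting at `seed`; users with high landing mass are candidate
    representatives for the social summarization."""
    probs = {seed: 1.0}
    for _ in range(L):
        nxt = {}
        for node, p in probs.items():
            nbrs = adj.get(node, [])
            if not nbrs:                      # dangling node: stay put
                nxt[node] = nxt.get(node, 0.0) + p
                continue
            share = p / len(nbrs)
            for nb in nbrs:                   # split mass uniformly
                nxt[nb] = nxt.get(nb, 0.0) + share
        probs = nxt
    return probs

# Hypothetical friendship graph: u knows a and b; a knows u and b; b knows u.
friends = {"u": ["a", "b"], "a": ["u", "b"], "b": ["u"]}
scores = l_length_walk_scores(friends, "u", L=2)
```

Selecting the few nodes with the largest mass yields a small set of representative users whose assigned scores approximate the influence of the full topic-user population.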
DM16NXT07 TITLE: Survey on Aspect-Level Sentiment Analysis
ABSTRACT: The field of sentiment analysis, in which sentiment is gathered,
analyzed, and aggregated from text, has seen a lot of attention in the last few
years. The corresponding growth of the field has resulted in the emergence of
various subareas, each addressing a different level of analysis or research
question. This survey focuses on aspect-level sentiment analysis, where the goal
is to find and aggregate sentiment on entities mentioned within documents or
aspects of them. An in-depth overview of the current state-of-the-art is given,
showing the tremendous progress that has already been made in finding both
the target, which can be an entity as such, or some aspect of it, and the
corresponding sentiment. Aspect-level sentiment analysis yields very fine-
grained sentiment information which can be useful for applications in various
domains. Current solutions are categorized based on whether they provide a
method for aspect detection, sentiment analysis, or both. Furthermore, a
breakdown based on the type of algorithm used is provided. For each discussed
study, the reported performance is included. To facilitate the quantitative
evaluation of the various proposed methods, a call is made for the
standardization of the evaluation methodology that includes the use of shared
data sets. Semantically rich, concept-centric aspect-level sentiment analysis is
discussed and identified as one of the most promising future research directions.
DM16NXT08 TITLE: Multilabel Classification via Co-evolutionary Multilabel Hypernetwork
ABSTRACT: Multilabel classification is prevalent in many real-world applications
where data instances may be associated with multiple labels simultaneously. In
multilabel classification, exploiting label correlations is an essential but nontrivial
task. Most of the existing multilabel learning algorithms are either ineffective or
computationally demanding and less scalable in exploiting label correlations. In
this paper, we propose a co-evolutionary multilabel hypernetwork (Co-MLHN) as
an attempt to exploit label correlations in an effective and efficient way. To this
end, we first convert the traditional hypernetwork into a multilabel
hypernetwork (MLHN) where label correlations are explicitly represented. We
then propose a co-evolutionary learning algorithm to learn an integrated
classification model for all labels. The proposed Co-MLHN exploits arbitrary order
label correlations and has linear computational complexity with respect to the
number of labels. Empirical studies on a broad range of multilabel data sets
demonstrate that Co-MLHN achieves competitive results against state-of-the-art
multilabel learning algorithms, in terms of both classification performance and
scalability with respect to the number of labels.
DM16NXT09 TITLE: Answering Pattern Queries Using Views
ABSTRACT: Answering queries using views has proven effective for querying
relational and semistructured data. This paper investigates this issue for graph
pattern queries based on graph simulation. We propose a notion of pattern
containment to characterize graph pattern matching using graph pattern views.
We show that a pattern query can be answered using a set of views if and only if
it is contained in the views. Based on this characterization, we develop efficient
algorithms to answer graph pattern queries. We also study problems for
determining (minimal, minimum) containment of pattern queries. We establish
their complexity (from cubic-time to NP-complete) and provide efficient checking
algorithms (approximation when the problem is intractable). In addition, when a
pattern query is not contained in the views, we study maximally contained
rewriting to find approximate answers; we show that such a rewriting can be
computed in cubic time, and present a rewriting algorithm. We experimentally
verify that these methods are able to efficiently answer pattern queries on large
real-world graphs.
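Graph simulation, the matching semantics this paper builds on, can be computed by a standard fixpoint refinement. Below is a minimal sketch (the node labels, tiny example graph, and function names are ours, for illustration only):

```python
def graph_simulation(q_nodes, q_edges, g_nodes, g_edges):
    """Maximum graph-simulation relation: sim[u] is the set of data-graph
    nodes that can match pattern node u. Nodes map to labels; edges are
    (src, dst) pairs. Candidate sets are refined to a fixpoint."""
    g_succ = {}
    for s, d in g_edges:
        g_succ.setdefault(s, set()).add(d)
    # Start from label-compatible candidates, then prune any candidate
    # that cannot honor some outgoing pattern edge.
    sim = {u: {v for v, lbl in g_nodes.items() if lbl == q_nodes[u]}
           for u in q_nodes}
    changed = True
    while changed:
        changed = False
        for u, u2 in q_edges:
            keep = {v for v in sim[u] if g_succ.get(v, set()) & sim[u2]}
            if keep != sim[u]:
                sim[u], changed = keep, True
    return sim

# Pattern: a manager node with an edge to an analyst node.
pattern_nodes = {"m": "manager", "a": "analyst"}
pattern_edges = {("m", "a")}
# Data graph: node 3 is a manager with no analyst successor.
data_nodes = {1: "manager", 2: "analyst", 3: "manager"}
data_edges = {(1, 2)}
sim = graph_simulation(pattern_nodes, pattern_edges, data_nodes, data_edges)
```

The refinement loop is where the cubic-time bounds mentioned above come from: each pass shrinks some candidate set, and membership tests touch pattern edges against data-graph adjacency.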
DM16NXT10 TITLE: Similarity Measure Selection for Clustering Time Series Databases
ABSTRACT: In the past few years, clustering has become a popular task
associated with time series. The choice of a suitable distance measure is crucial
to the clustering process and, given the vast number of distance measures for
time series available in the literature and their diverse characteristics, this
selection is not straightforward. With the objective of simplifying this task, we
propose a multi-label classification framework that provides the means to
automatically select the most suitable distance measures for clustering a time
series database. This classifier is based on a novel collection of characteristics
that describe the main features of the time series databases and provide the
predictive information necessary to discriminate between a set of distance
measures. In order to test the validity of this classifier, we conduct a complete
set of experiments using both synthetic and real time series databases and a set
of five common distance measures. The positive results obtained by the
designed classification framework for various performance measures indicate
that the proposed methodology is useful to simplify the process of distance
selection in time series clustering tasks.
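As a concrete example of two measures such a classifier might have to choose between (our own self-contained sketch; the abstract does not name the five measures used), dynamic time warping tolerates local time shifts that Euclidean distance penalizes:

```python
def dtw(a, b):
    """Dynamic time warping distance with absolute-difference local cost;
    an elastic alternative to Euclidean distance for comparing series
    of possibly different lengths."""
    INF = float("inf")
    n, m = len(a), len(b)
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Extend the cheapest of the three admissible alignments.
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

def euclidean(a, b):
    """Lock-step Euclidean distance (requires equal-length series)."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
```

A series with a repeated sample, e.g. [1, 2, 2, 3] versus [1, 2, 3], has DTW distance 0 but cannot even be compared lock-step; characteristics like this are exactly the predictive information the proposed classifier exploits.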
DM16NXT11 TITLE: A Novel Recommendation Model Regularized with User Trust and Item
Ratings
ABSTRACT: We address the problem of finding query facets which are multiple
groups of words or phrases that explain and summarize the content covered by a
query. We assume that the important aspects of a query are usually presented
and repeated in the query’s top retrieved documents in the style of lists, and
query facets can be mined out by aggregating these significant lists. We propose
a systematic solution, which we refer to as QDMiner, to automatically mine
query facets by extracting and grouping frequent lists from free text, HTML tags,
and repeat regions within top search results. Experimental results show that a
large number of lists do exist and useful query facets can be mined by QDMiner.
We further analyze the problem of list duplication, and find better query facets
can be mined by modeling fine-grained similarities between lists and penalizing
the duplicated lists.
DM16NXT13 TITLE: Booster in High Dimensional Data Classification
ABSTRACT: Data mining provides an automatic and effective way for reservoir
evaluation based on well logging interpretation. In this paper, we propose a
framework of data mining for logging reservoir evaluation and verify its
performance on two well logging datasets. Experimental results show that the
conclusions of well-logging interpretation by the framework proposed in this
paper are consistent with the test oil results. Compared with traditional
evaluation methods, it is efficient and not so dependent on expertise. This
framework provides a reference for the construction of big data platform for oil
exploration and development.
DM16NXT23 TITLE: Research and Implementation of the Positioning System for Coal Mining
Staff Based on Random Forests
ABSTRACT: Nowadays, health diseases are increasing day by day due to lifestyle
and hereditary factors. In particular, heart disease has become more common
these days, putting people's lives at risk. Each individual has different values for
blood pressure, cholesterol, and pulse rate. According to medically proven
results, the normal value of blood pressure is 120/90 and the normal pulse rate
is 72. This paper surveys different classification techniques used for predicting
the risk level of each person based on age, gender, blood pressure, cholesterol,
and pulse rate. The patient risk level is classified using data mining classification
techniques such as Naïve Bayes, KNN, decision trees, and neural networks. The
accuracy of the risk-level prediction is higher when a larger number of attributes
is used.
DM16NXT25 TITLE: Framework for data model to personalized health systems
IM16NXT04 TITLE: Microwave Unmixing With Video Segmentation for Inferring Broadleaf
and Needleleaf Brightness Temperatures and Abundances From Mixed Forest
Observations
ABSTRACT: Passive microwave sensors have a strong capability of penetrating
forest layers to obtain information from the forest canopy and ground surface.
For forest management, it is therefore useful to study passive microwave signals
from forests. Passive microwave sensors can detect signals from needleleaf,
broadleaf, and mixed forests. The observed brightness temperature of a mixed
forest can be approximated by a linear combination of the needleleaf and
broadleaf brightness temperatures weighted by their respective abundances. For
a mixed forest observed by an N-band microwave radiometer with horizontal
and vertical polarizations, there are 2N observed brightness temperatures. It is
desirable to infer 4N + 2 unknowns: 2N broadleaf brightness temperatures, 2N
needleleaf brightness temperatures, 1 broadleaf abundance, and 1 needleleaf
abundance. This is a challenging underdetermined problem. In this paper, we
devise a novel method that combines microwave unmixing with video
segmentation for inferring broadleaf and needleleaf brightness temperatures
and abundances from mixed forests. We propose an improved Otsu method for
video segmentation to infer broadleaf and needleleaf abundances. The
brightness temperatures of needleleaf and broadleaf trees can then be solved by
the nonnegative least squares solution. For our mixed forest unmixing problem,
it turns out that the ordinary least squares solution yields the desired positive
brightness temperatures. The experimental results demonstrate that the
proposed method is able to unmix broadleaf and needleleaf brightness
temperatures and abundances well. The absolute differences between the
reconstructed and observed brightness temperatures of the mixed forest are
well within 1 K.
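Once the abundances are known from segmentation, the per-band least-squares step is a small closed-form problem. The sketch below (a simplified single-band illustration with synthetic values of our own; the paper works across N bands and two polarizations) solves the 2x2 normal equations directly:

```python
def unmix_band(abundances, observed):
    """Least-squares estimate of the broadleaf/needleleaf brightness
    temperatures T_b, T_n for one band, from pixels whose observed
    mixed temperature satisfies obs_i ≈ a_i*T_b + (1 - a_i)*T_n,
    where a_i is the broadleaf abundance of pixel i."""
    s_aa = sum(a * a for a in abundances)
    s_ab = sum(a * (1 - a) for a in abundances)
    s_bb = sum((1 - a) ** 2 for a in abundances)
    r_a = sum(a * o for a, o in zip(abundances, observed))
    r_b = sum((1 - a) * o for a, o in zip(abundances, observed))
    det = s_aa * s_bb - s_ab * s_ab          # 2x2 normal-equation determinant
    t_broad = (s_bb * r_a - s_ab * r_b) / det
    t_needle = (s_aa * r_b - s_ab * r_a) / det
    return t_broad, t_needle

# Synthetic pixels generated with T_b = 280 K and T_n = 260 K:
ab = [0.2, 0.5, 0.8]
obs = [0.2 * 280 + 0.8 * 260, 0.5 * 280 + 0.5 * 260, 0.8 * 280 + 0.2 * 260]
tb, tn = unmix_band(ab, obs)
```

With noise-free synthetic data the fit recovers the true temperatures exactly; on real observations the residual corresponds to the sub-1 K reconstruction differences the abstract reports.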
IM16NXT05 TITLE: Spectral–Spatial Adaptive Sparse Representation for Hyperspectral
Image Denoising
ABSTRACT: In this paper, a novel spectral-spatial adaptive sparse representation
(SSASR) method is proposed for hyperspectral image (HSI) denoising. The
proposed SSASR method aims at improving noise-free estimation for noisy HSI by
making full use of highly correlated spectral information and highly similar
spatial information via sparse representation, which consists of the following
three steps. First, according to spectral correlation across bands, the HSI is
partitioned into several nonoverlapping band subsets. Each band subset contains
multiple continuous bands with highly similar spectral characteristics. Then,
within each band subset, shape-adaptive local regions consisting of spatially
similar pixels are searched in the spatial domain. In this way, spectrally and
spatially similar pixels can be grouped. Finally, the highly correlated and similar spectral-spatial
information in each group is effectively used via the joint sparse coding, in order
to generate better noise-free estimation. The proposed SSASR method is
evaluated by different objective metrics in both real and simulated experiments.
The numerical and visual comparison results demonstrate the effectiveness and
superiority of the proposed method.
IM16NXT06 TITLE: A Decomposition Framework for Image Denoising Algorithms
ABSTRACT: In this paper, we consider an image decomposition model that
provides a novel framework for image denoising. The model computes the
components of the image to be processed in a moving frame that encodes its
local geometry (directions of gradients and level lines). Then, the strategy we
develop is to denoise the components of the image in the moving frame in order
to preserve its local geometry, which would have been more affected if
processing the image directly. Experiments on a whole image database tested
with several denoising methods show that this framework can provide better
results than denoising the image directly, in terms of both peak signal-to-noise
ratio (PSNR) and structural similarity index (SSIM) metrics.
IM16NXT07 TITLE: Scalable Feature Matching by Dual Cascaded Scalar Quantization for
Image Retrieval
ABSTRACT: In this paper, we investigate the problem of scalable visual feature
matching in large-scale image search and propose a novel cascaded scalar
quantization scheme in dual resolution. We formulate the visual feature
matching as a range-based neighbor search problem and approach it by
identifying hyper-cubes with a dual-resolution scalar quantization strategy.
Specifically, for each dimension of the PCA-transformed feature, scalar
quantization is performed at both coarse and fine resolutions. The scalar
quantization results at the coarse resolution are cascaded over multiple
dimensions to index an image database. The scalar quantization results over
multiple dimensions at the fine resolution are concatenated into a binary super-
vector and stored into the index list for efficient verification. The proposed
cascaded scalar quantization (CSQ) method is free of the costly visual codebook
training and thus is independent of any image descriptor training set. The index
structure of the CSQ is flexible enough to accommodate new image features and
scalable enough to index large-scale image databases. We evaluate our approach on the
public benchmark datasets for large-scale image retrieval. Experimental results
demonstrate the competitive retrieval performance of the proposed method
compared with several recent retrieval algorithms on feature quantization.
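The dual-resolution idea can be sketched per dimension: coarse codes are concatenated into an index key, while fine codes form the binary super-vector used for verification. This is our simplified illustration (uniform quantizers, hypothetical bit widths and feature values; the paper operates on PCA-transformed descriptors):

```python
def quantize(feature, coarse_bits=2, fine_bits=4, lo=-1.0, hi=1.0):
    """Dual-resolution scalar quantization of one feature vector:
    returns (coarse index key, fine verification code)."""
    coarse_key, fine_vec = [], []
    for x in feature:
        # Normalize into [0, 1) and quantize at both resolutions.
        t = min(max((x - lo) / (hi - lo), 0.0), 1.0 - 1e-9)
        coarse_key.append(int(t * (1 << coarse_bits)))
        fine_vec.append(int(t * (1 << fine_bits)))
    return tuple(coarse_key), tuple(fine_vec)

# Build a tiny inverted index keyed on the cascaded coarse codes.
index = {}
for name, feat in [("img1", [0.10, -0.40]), ("img2", [0.12, -0.38]),
                   ("img3", [0.90, 0.85])]:
    key, fine = quantize(feat)
    index.setdefault(key, []).append((name, fine))

k1, _ = quantize([0.10, -0.40])
k2, _ = quantize([0.12, -0.38])
k3, _ = quantize([0.90, 0.85])
```

Nearby features collide on the coarse key (a cheap candidate lookup), and the stored fine codes then filter those candidates; no codebook training is involved, which is the training-free property the abstract emphasizes.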
IM16NXT08 TITLE: Rotation Invariant Texture Description Using Symmetric Dense
Microblock Difference
ABSTRACT: This letter is devoted to the problem of rotation-invariant texture
classification. A novel rotation-invariant feature, the symmetric dense microblock
difference (SDMD), is proposed, which captures information at different
orientations and scales. N-fold symmetry is introduced in the feature design
configuration, while retaining the random structure that provides discriminative
power. The symmetry is utilized to achieve rotation invariance. The SDMD is
extracted using an image pyramid and encoded by the Fisher vector approach
resulting in a descriptor which captures variations at different resolutions
without increasing the dimensionality. The proposed image representation is
combined with the linear SVM classifier. Extensive experiments are conducted
on four texture data sets [Brodatz, UMD, UIUC, and Flickr material data set
(FMD)] using standard protocols. The results demonstrate that our approach
outperforms the state of the art in texture classification.
IM16NXT09 TITLE: PiCode: a New Picture-Embedding 2D Barcode
IM16NXT10 TITLE: OCR Based Feature Extraction and Template Matching Algorithms for
Qatari Number Plate
ABSTRACT: There are several algorithms and methods that could be applied to
perform the character recognition stage of an automatic number plate
recognition system; however, the constraints of having a high recognition rate
and real-time processing should be taken into consideration. In this paper four
algorithms applied to Qatari number plates are presented and compared. The
proposed algorithms are based on feature extraction (vector crossing, zoning,
combined zoning and vector crossing) and template matching techniques. All
four proposed algorithms have been implemented and tested using MATLAB. A
total of 2790 Qatari binary character images were used to test the algorithms.
The template matching based algorithm showed the highest recognition rate of
99.5% with an average time of 1.95 ms per character.
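The core of template matching on binary character images is a pixel-agreement score against each stored template. A minimal sketch (toy 3x3 glyphs of our own invention, far smaller than real plate characters):

```python
def match_score(image, template):
    """Fraction of pixels that agree between a binary character image
    and a same-sized binary template."""
    hits = sum(1 for row_i, row_t in zip(image, template)
               for p_i, p_t in zip(row_i, row_t) if p_i == p_t)
    return hits / (len(image) * len(image[0]))

def recognize(image, templates):
    """Return the label of the best-matching template."""
    return max(templates, key=lambda lbl: match_score(image, templates[lbl]))

# Hypothetical 3x3 binary glyphs for '1' and '7'.
templates = {
    "1": [[0, 1, 0], [0, 1, 0], [0, 1, 0]],
    "7": [[1, 1, 1], [0, 0, 1], [0, 1, 0]],
}
noisy_one = [[0, 1, 0], [1, 1, 0], [0, 1, 0]]  # a '1' with one flipped pixel
```

Because the score is a simple per-pixel comparison, the method is fast enough for the real-time constraint the abstract mentions, at the cost of sensitivity to misalignment.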
IM16NXT11 TITLE: A Hands-on Application-Based Tool for STEM Students to Understand
Differentiation
IM16NXT12 TITLE: Noise Power Spectrum Measurements in Digital Imaging With Gain
Nonuniformity Correction
ABSTRACT: The noise power spectrum (NPS) of an image sensor provides the
spectral noise properties needed to evaluate sensor performance. Hence,
measuring an accurate NPS is important. However, the fixed pattern noise from
the sensor's nonuniform gain inflates the NPS, which is measured from images
acquired by the sensor. Detrending the low-frequency fixed pattern is
traditionally used to accurately measure NPS. However, detrending methods
cannot remove high-frequency fixed patterns. In order to efficiently correct the
fixed pattern noise, a gain-correction technique based on the gain map can be
used. The gain map is generated using the average of uniformly illuminated
images without any objects. Increasing the number of images n for averaging can
reduce the remaining photon noise in the gain map and yield accurate NPS
values. However, for practical finite n, the photon noise also significantly inflates
NPS. In this paper, a nonuniform-gain image formation model is proposed and
the performance of the gain correction is theoretically analyzed in terms of the
signal-to-noise ratio (SNR). It is shown that the SNR is O(√n). An NPS
measurement algorithm based on the gain map is then proposed for any given n.
Under a weak nonuniform gain assumption, another measurement algorithm
based on the image difference is also proposed. For real radiography image
detectors, the proposed algorithms are compared with traditional detrending
and subtraction methods, and it is shown that as few as two images (n = 1) can
provide an accurate NPS because of the compensation constant (1 + 1/n).
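The gain-map correction step can be sketched on one-dimensional "images" (our simplified, noise-free illustration with hypothetical gain values; real detectors add photon noise, which is what the compensation constant accounts for):

```python
def gain_map(flat_images):
    """Estimate per-pixel gain from the average of n uniformly
    illuminated flat images, normalized to mean 1."""
    n = len(flat_images)
    avg = [sum(img[i] for img in flat_images) / n
           for i in range(len(flat_images[0]))]
    mean = sum(avg) / len(avg)
    return [v / mean for v in avg]

def correct(image, gain):
    """Divide out the fixed-pattern gain so that, ideally, only
    temporal noise remains in the corrected image."""
    return [p / g for p, g in zip(image, gain)]

# Two noise-free flats exposing a nonuniform gain pattern [1.0, 1.2, 0.8]:
flats = [[100.0, 120.0, 80.0], [100.0, 120.0, 80.0]]
g = gain_map(flats)
uniform = correct([200.0, 240.0, 160.0], g)
```

With noise-free flats the correction removes the fixed pattern exactly; with finite n, residual photon noise in the gain map inflates the measured NPS, motivating the (1 + 1/n) compensation discussed above.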
IM16NXT13 TITLE: A DCT-based Total JND Profile for Spatio-Temporal and Foveated
Masking Effects
IM16NXT17 TITLE: Comprehensive and Practical Vision System for Self-Driving Vehicle Lane-
Level Localization
IM16NXT18 TITLE: Robust Blur Kernel Estimation for License Plate Images From Fast Moving
Vehicles
IM16NXT21 TITLE: Joint Low-Rank and Sparse Principal Feature Coding for Enhanced Robust
Representation and Visual Classification
ABSTRACT: This paper presents a new method for classifying the type of image
degradation, based on the Riesz transform and the BRISQUE no-reference
quality measure. The Riesz transform has useful properties and can be applied
in many contexts: it allows the construction of families of steerable wavelets of
arbitrary order in any number of dimensions, yields filter banks with perfect
reconstruction, and extends to more than two dimensions. The statistical
properties of the MSCN coefficients used by BRISQUE change in the presence of
distortion, and by quantifying these changes with features computed from GGD
and AGGD models, the class of distortion can be determined. We calculated 18
statistical features from the spatial coefficients defined by the BRISQUE
measure and 19 parameters from the Riesz coefficients, for 37 features in total,
and then used these features as input to an SVM regressor to identify the type
of image degradation. Finally, we compared the new method with the BRISQUE
method using McNemar's statistical test to show the statistical significance of
our method.
ABSTRACT: Fundus images are suitable for automatic evaluation in current
clinical practice. The detection and localization of important areas such as the
vascular bed, the yellow spot (fovea), and the optic disc are necessary for the
subsequent detection of pathological changes. After image preprocessing, we
use a very rapid gradient-based method, the fast radial symmetry transform
(FRST), to find the approximately circular (elliptical) shape of the optic disc. We
compare the results achieved by this method with the results of two other
methods for detecting circular-to-elliptical objects: the Hough transform and a
template-based histogram method.
CLOUD COMPUTING
CC16NXT01 TITLE: Dynamic and Public Auditing with Fair Arbitration for Cloud Data
ABSTRACT: Cloud users no longer physically possess their data, so how to ensure
the integrity of their outsourced data becomes a challenging task. Recently
proposed schemes such as “provable data possession” and “proofs of
retrievability” are designed to address this problem, but they target static
archive data and therefore lack support for data dynamics. Moreover,
threat models in these schemes usually assume an honest data owner and focus
on detecting a dishonest cloud service provider despite the fact that clients may
also misbehave. This paper proposes a public auditing scheme with data
dynamics support and fairness arbitration of potential disputes. In particular, we
design an index switcher to eliminate the limitation of index usage in tag
computation in current schemes and achieve efficient handling of data dynamics.
To address the fairness problem so that no party can misbehave without being
detected, we further extend existing threat models and adopt signature
exchange idea to design fair arbitration protocols, so that any possible dispute
can be fairly settled. The security analysis shows our scheme is provably secure,
and the performance evaluation demonstrates that the overheads of data
dynamics and dispute arbitration are reasonable.
CC16NXT02 TITLE: Enabling Cloud Storage Auditing with Verifiable Outsourcing of Key
Updates
ABSTRACT: Key-exposure resistance has always been an important issue for in-
depth cyber defence in many security applications. Recently, how to deal with
the key exposure problem in the settings of cloud storage auditing has been
proposed and studied. To address the challenge, existing solutions all require the
client to update his secret keys in every time period, which inevitably introduces
new local burdens for clients, especially those with limited computation
resources, such as mobile phones. In this paper, we focus on how to make the
key updates as transparent as possible for the client and propose a new
paradigm called cloud storage auditing with verifiable outsourcing of key
updates. In this paradigm, key updates can be safely outsourced to some
authorized party, and thus the key-update burden on the client will be kept
minimal. In particular, we leverage the third party auditor (TPA) in many existing
public auditing designs, let it play the role of authorized party in our case, and
make it in charge of both the storage auditing and the secure key updates for
key-exposure resistance. In our design, TPA only needs to hold an encrypted
version of the client's secret key while doing all these burdensome tasks on
behalf of the client. The client only needs to download the encrypted secret key
from the TPA when uploading new files to the cloud. Besides, our design also equips
the client with the capability to further verify the validity of the encrypted secret
keys provided by the TPA. All these salient features are carefully designed to
make the whole auditing procedure with key exposure resistance as transparent
as possible for the client. We formalize the definition and the security model of
this paradigm. The security proof and the performance simulation show that our
detailed design instantiations are secure and efficient.
CC16NXT03 TITLE: Providing User Security Guarantees in Public Infrastructure Clouds
ABSTRACT: The infrastructure cloud (IaaS) service model offers improved
resource flexibility and availability, where tenants – insulated from the minutiae
of hardware maintenance – rent computing resources to deploy and operate
complex systems. Large-scale services running on IaaS platforms demonstrate
the viability of this model; nevertheless, many organizations operating on
sensitive data avoid migrating operations to IaaS platforms due to security
concerns. In this paper, we describe a framework for data and operation security
in IaaS, consisting of protocols for a trusted launch of virtual machines and
domain-based storage protection. We continue with an extensive theoretical
analysis with proofs about protocol resistance against attacks in the defined
threat model. The protocols allow trust to be established by remotely attesting
host platform configuration prior to launching guest virtual machines and ensure
confidentiality of data in remote storage, with encryption keys maintained
outside of the IaaS domain. Presented experimental results demonstrate the
validity and efficiency of the proposed protocols. The framework prototype was
implemented on a test bed operating a public electronic health record system,
showing that the proposed protocols can be integrated into existing cloud
environments.
CC16NXT04 TITLE: Attribute-Based Data Sharing Scheme Revisited in Cloud Computing
ABSTRACT: Ciphertext-policy attribute-based encryption (CP-ABE) is a very
promising encryption technique for secure data sharing in the context of cloud
computing. The data owner is allowed to fully control the access policy
associated with the data to be shared. However, CP-ABE suffers from a
potential security risk known as the key escrow problem, whereby the secret
keys of users have to be issued by a trusted key authority. Besides, most
existing CP-ABE schemes cannot support attributes with arbitrary states. In this
paper, we revisit the attribute-based data sharing scheme to not only solve the
key escrow issue but also improve the expressiveness of attributes, so that the
resulting scheme is friendlier to cloud computing applications. We propose an
improved two-party key issuing protocol that guarantees that neither the key
authority nor the cloud service provider can compromise the whole secret key
of a user individually. Moreover, we introduce the concept of attributes with
weights, which enhances the expressiveness of attributes: it not only extends
the expression from binary to arbitrary states but also reduces the complexity
of access policies. As a result, both the storage cost and the encryption
complexity of a ciphertext are relieved. The performance analysis and the
security proof show that the proposed scheme achieves efficient and secure
data sharing in cloud computing.
CC16NXT05 TITLE: An Efficient File Hierarchy Attribute-Based Encryption Scheme in Cloud
Computing
ABSTRACT: Ciphertext-policy attribute-based encryption (CP-ABE) has been a
preferred encryption technology to solve the challenging problem of secure data
sharing in cloud computing. The shared data files generally have the
characteristic of multilevel hierarchy, particularly in the area of healthcare and
the military. However, the hierarchy structure of shared files has not been
explored in CP-ABE. In this paper, an efficient file hierarchy attribute-based
encryption scheme is proposed in cloud computing. The layered access
structures are integrated into a single access structure, and then, the hierarchical
files are encrypted with the integrated access structure. The ciphertext
components related to attributes could be shared by the files. Therefore, both
ciphertext storage and time cost of encryption are saved. Moreover, the
proposed scheme is proved to be secure under the standard assumption.
Experimental simulation shows that the proposed scheme is highly efficient in
terms of encryption and decryption. As the number of files increases, the
advantages of our scheme become increasingly conspicuous.
CC16NXT06 TITLE: Identity-Based Proxy-Oriented Data Uploading and Remote Data
Integrity Checking in Public Cloud
ABSTRACT: More and more clients would like to store their data on public cloud
servers (PCSs) along with the rapid development of cloud computing. New
security problems have to be solved in order to help more clients process their
data in the public cloud. When a client is restricted from accessing the PCS, he
will delegate his proxy to process his data and upload it. On the other hand,
remote data integrity checking is also an important security problem in public
cloud storage. It enables clients to check whether their outsourced data are
kept intact without downloading the whole data. Motivated by these security
problems, we propose a novel
proxy-oriented data uploading and remote data integrity checking model in
identity-based public key cryptography: identity-based proxy-oriented data
uploading and remote data integrity checking in public cloud (ID-PUIC). We give
the formal definition, system model, and security model. Then, a concrete ID-
PUIC protocol is designed using the bilinear pairings. The proposed ID-PUIC
protocol is provably secure based on the hardness of computational Diffie-
Hellman problem. Our ID-PUIC protocol is also efficient and flexible. Based on
the original client's authorization, the proposed ID-PUIC protocol can realize
private remote data integrity checking, delegated remote data integrity
checking, and public remote data integrity checking.
CC16NXT07 TITLE: Outsourcing Eigen-Decomposition and Singular Value Decomposition of
Large Matrix to a Public Cloud
ABSTRACT: Cloud computing enables customers with limited computational
resources to outsource their huge computation workloads to the cloud with
massive computational power. However, utilizing this computing paradigm
presents various challenges that need to be addressed, especially
security. As eigen-decomposition (ED) and singular value decomposition (SVD) of
a matrix are widely applied in engineering tasks, we are motivated to design
secure, correct, and efficient protocols for outsourcing the ED and SVD of a
matrix to a malicious cloud in this paper. In order to achieve security, we employ
efficient privacy-preserving transformations to protect both the input and output
privacy. In order to check the correctness of the result returned from the cloud,
an efficient verification algorithm is employed. A computational complexity
analysis shows that our protocols are highly efficient. We also present
outsourced principal component analysis as an application of our two proposed
protocols.
CC16NXT08 TITLE: Performance Limitations of A Text Search Application Running in Cloud
Instances
ABSTRACT: This article analyzes the performance of MySQL in clouds based on
commodity hardware in order to identify the bottlenecks in the execution of a
series of scripts developed on the SQL standard. The scripts were designed to
perform text search over a considerable number of records. Two types of
platforms were employed: a physical machine that serves as host and an
instance within a cloud infrastructure. The results show that intensive use of a
relational database suffers a greater loss of performance in a cloud instance
due to limitations in the primary storage system employed in the cloud
infrastructure.
CC16NXT09 TITLE: A Dynamic Load Balancing Method of Cloud-Center Based on SDN
ABSTRACT: To achieve dynamic load balancing at the data-flow level, we apply
SDN technology to the cloud data center and propose a dynamic load balancing
method for cloud centers based on SDN. The approach exploits the flexibility
that SDN brings to task scheduling, accomplishing real-time monitoring of
service-node traffic and load conditions through the OpenFlow protocol. When
the system load becomes imbalanced, the controller can allocate network
resources globally. Moreover, by applying dynamic correction, the system load
does not tilt noticeably in the long run. Simulation results show that this
approach can ensure that the load will not tilt over a long period of time and
can improve system throughput.
CC16NXT10 TITLE: Attribute-Based Access Control for Multi-Authority Systems with
Constant Size Ciphertext in Cloud Computing
ABSTRACT: In most existing CP-ABE schemes, there is only one authority in the
system and all the public keys and private keys are issued by this authority,
which incurs ciphertext size and computation costs in the encryption and
decryption operations that depend at least linearly on the number of attributes
involved in the access policy. We propose an efficient multi-authority CP-ABE
scheme in which the authorities need not interact to generate public information
during the system initialization phase. Our scheme has constant ciphertext
length and a constant number of pairing computations. Our scheme can be
proven CPA-secure in random oracle model under the decision q-BDHE
assumption. When a user's attributes are revoked, the scheme transfers most of
the re-encryption work to the cloud service provider, reducing the data owner's
computational cost without sacrificing security. Finally, the analysis and
simulation results show that the proposed scheme ensures the privacy and
secure access of sensitive data stored in the cloud server and can cope with
dynamic changes of users' access privileges in large-scale systems. Besides, the
multi-authority ABE eliminates the key escrow problem, optimizes the
ciphertext length, and enhances the efficiency of the encryption and decryption
operations.
CC16NXT11 TITLE: Dynamic Certification of Cloud Services: Trust, but Verify!
ABSTRACT: Although intended to ensure cloud service providers' security,
reliability, and legal compliance, current cloud service certifications are quickly
outdated. Dynamic certification, on the other hand, provides automated
monitoring and auditing to verify cloud service providers' ongoing adherence to
certification requirements.
CC16NXT12 TITLE: Auditing a Cloud Provider’s Compliance With Data Backup
Requirements: A Game Theoretical Analysis
ABSTRACT: The new developments in cloud computing have introduced
significant security challenges to guarantee the confidentiality, integrity, and
availability of outsourced data. A service level agreement (SLA) is usually signed
between the cloud provider (CP) and the customer. For redundancy purposes, it
is important to verify the CP’s compliance with data backup requirements in the
SLA. There exist a number of security mechanisms to check the integrity and
availability of outsourced data. This task can be performed by the customer or
be delegated to an independent entity that we will refer to as the verifier.
However, checking the availability of data introduces extra costs, which can
discourage the customer from performing data verification too often. The
interaction between the verifier and the CP can be captured using game theory
in order to find an optimal data verification strategy. In this paper, we formulate
this problem as a two player non-cooperative game. We consider the case in
which each type of data is replicated a number of times, which can depend on a
set of parameters including, among others, its size and sensitivity. We analyze
the strategies of the CP and the verifier at the Nash equilibrium and derive the
expected behavior of both the players. Finally, we validate our model
numerically on a case study and explain how we evaluate the parameters in the
model.
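As a toy illustration of this game-theoretic setup, the sketch below enumerates pure-strategy Nash equilibria of a two-player verifier-vs-provider game; the payoff numbers are invented for illustration and are not taken from the paper. In such inspection-style games no pure equilibrium typically exists, which is what forces randomized (mixed) verification strategies:

```python
def pure_nash_equilibria(payoffs):
    """Enumerate pure-strategy Nash equilibria of a two-player game.
    payoffs[(a, b)] = (u_verifier, u_provider) for verifier action a
    and cloud-provider action b."""
    a_moves = {a for a, _ in payoffs}
    b_moves = {b for _, b in payoffs}
    equilibria = []
    for a in a_moves:
        for b in b_moves:
            u_v, u_p = payoffs[(a, b)]
            # (a, b) is an equilibrium if neither player gains by deviating
            if all(payoffs[(x, b)][0] <= u_v for x in a_moves) and \
               all(payoffs[(a, y)][1] <= u_p for y in b_moves):
                equilibria.append((a, b))
    return equilibria

# Hypothetical payoffs: verification is costly, catching a cheating
# provider pays off, and cheating only profits when unverified.
game = {
    ("verify", "backup"): (-1, 0),
    ("verify", "cheat"): (5, -10),
    ("skip", "backup"): (0, 0),
    ("skip", "cheat"): (-5, 5),
}
```

With these payoffs every strategy pair gives one player a profitable deviation, so the equilibrium must randomize over verifying and skipping.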
CC16NXT13 TITLE: An Efficient Privacy-Preserving Ranked Keyword Search Method
ABSTRACT: Cloud data owners prefer to outsource documents in an encrypted
form for privacy preservation. Therefore, it is essential to develop
efficient and reliable ciphertext search techniques. One challenge is that the
relationship between documents will be normally concealed in the process of
encryption, which will lead to significant search accuracy performance
degradation. Also the volume of data in data centers has experienced a dramatic
growth. This will make it even more challenging to design ciphertext search
schemes that can provide efficient and reliable online information retrieval on
large volume of encrypted data. In this paper, a hierarchical clustering method is
proposed to support more search semantics and also to meet the demand for
fast ciphertext search within a big data environment. The proposed hierarchical
approach clusters the documents based on the minimum relevance threshold,
and then partitions the resulting clusters into sub-clusters until the constraint on
the maximum cluster size is reached. In the search phase, this approach
achieves linear computational complexity even against an exponential increase
in the size of the document collection. In order to verify the authenticity of
search results, a
structure called minimum hash sub-tree is designed in this paper. Experiments
have been conducted using the collection set built from the IEEE Xplore. The
results show that with a sharp increase of documents in the dataset the search
time of the proposed method increases linearly whereas the search time of the
traditional method increases exponentially. Furthermore, the proposed method
has an advantage over the traditional method in the rank privacy and relevance
of retrieved documents.
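The two clustering phases described above can be sketched over plaintext documents as follows; `sim`, `min_rel`, and `max_size` are hypothetical parameter names standing in for the similarity measure, the minimum relevance threshold, and the maximum cluster size:

```python
def jaccard(a, b):
    """Set-overlap similarity between two token sets."""
    return len(a & b) / len(a | b)

def cluster_and_split(docs, sim, min_rel, max_size):
    """Greedy sketch of the hierarchical approach: group documents whose
    similarity to every current cluster member meets min_rel, then
    recursively split any cluster that exceeds max_size."""
    clusters = []
    for d in docs:
        for c in clusters:
            if all(sim(d, other) >= min_rel for other in c):
                c.append(d)
                break
        else:  # no sufficiently similar cluster: start a new one
            clusters.append([d])
    final, stack = [], clusters
    while stack:
        c = stack.pop()
        if len(c) <= max_size:
            final.append(c)
        else:  # enforce the maximum-cluster-size constraint
            mid = len(c) // 2
            stack.extend([c[:mid], c[mid:]])
    return final
```

At search time, only clusters whose representative matches the query need be descended into, which is the source of the sub-exponential search cost.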
CC16NXT14 TITLE: Towards Building Forensics Enabled Cloud Through Secure Logging-as-a-
Service
ABSTRACT: Collection and analysis of various logs (e.g., process logs, network
logs) are fundamental activities in computer forensics. Ensuring the security of
the activity logs is therefore crucial to ensure reliable forensics investigations.
However, because of the black-box nature of clouds and the volatility and co-
mingling of cloud data, providing the cloud logs to investigators while preserving
users' privacy and the integrity of logs is challenging. Current secure logging
schemes, which consider the logger trusted, cannot be applied in clouds, since
cloud providers (the loggers) may collude with malicious users or
investigators to alter the logs. In this paper, we analyze the threats on cloud
users' activity logs considering the collusion between cloud users, providers, and
investigators. Based on the threat model, we propose Secure-Logging-as-a-
Service ( SecLaaS), which preserves various logs generated for the activity of
virtual machines running in clouds and ensures the confidentiality and integrity
of such logs. Investigators or the court authority can only access these logs by
the RESTful APIs provided by SecLaaS, which ensures confidentiality of logs. The
integrity of the logs is ensured by a hash-chain scheme and by proofs of past
logs published periodically by the cloud providers. In prior research, we used
two accumulator schemes, the Bloom filter and the RSA accumulator, to build
the proofs of past logs. In this paper, we propose a new accumulator scheme,
the Bloom-Tree,
which performs better than the other two accumulators in terms of time and
space requirement.
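The hash-chain idea behind the log-integrity guarantee can be sketched as follows; this is a minimal illustration of chaining in general, not the SecLaaS protocol itself:

```python
import hashlib

def build_chain(entries, seed=b"genesis"):
    """Link log entries into a hash chain: each digest commits to the
    previous digest plus the current entry, so tampering with any entry
    invalidates every later digest in the chain."""
    digest = hashlib.sha256(seed).digest()
    chain = []
    for entry in entries:
        digest = hashlib.sha256(digest + entry.encode()).digest()
        chain.append(digest.hex())
    return chain
```

Publishing the final digest periodically (as the proofs of past logs do) lets an auditor detect any later rewrite of history, since recomputing the chain over altered entries cannot reproduce the published value.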
CC16NXT15 TITLE: A Genetic Algorithm for Virtual Machine Migration in Heterogeneous
Mobile Cloud Computing
ABSTRACT: Mobile Cloud Computing (MCC) improves the performance of a
mobile application by executing it at a resourceful cloud server that can minimize
execution time compared to a resource-constrained mobile device. Virtual
Machine (VM) migration in MCC brings cloud resources closer to a user so as to
further minimize the response time of an offloaded application. Such resource
migration is very effective for interactive and real-time applications. However,
the key challenge is to find an optimal cloud server for migration that offers the
maximum reduction in computation time. In this paper, we propose a Genetic
Algorithm (GA) based VM migration model, namely GAVMM, for heterogeneous
MCC system. In GAVMM, we take user mobility and load of the cloud servers
into consideration to optimize the effectiveness of VM migration. The goal of
GAVMM is to select the optimal cloud server for a mobile VM and to minimize
the total number of VM migrations, resulting in a reduced task execution time.
Additionally, we present a thorough numerical evaluation to investigate the
effectiveness of our proposed model compared to the state-of-the-art VM
migration policies.
CC16NXT16 TITLE: Resource Scheduling Under Concave Pricing for Cloud Computing
ABSTRACT: With the booming growth of cloud computing industry,
computational resources are readily and elastically available to the customers. In
order to attract customers with various demands, most Infrastructure-as-a-
service (IaaS) cloud service providers offer several pricing strategies such as pay
as you go, pay less per unit when you use more (so called volume discount), and
pay even less when you reserve. The diverse pricing schemes among different
IaaS service providers or even in the same provider form a complex economic
landscape that nurtures the market of cloud brokers. By strategically scheduling
multiple customers' resource requests, a cloud broker can fully take advantage
of the discounts offered by cloud service providers. In this paper, we focus on
how a broker may help a group of customers to fully utilize the volume discount
pricing strategy offered by cloud service providers through cost-efficient online
resource scheduling. We present a randomized online stack-centric scheduling
algorithm (ROSA) and theoretically prove the lower bound of its competitive
ratio. Our simulation shows that ROSA achieves a competitive ratio close to the
theoretical lower bound under a special case cost function. Trace driven
simulation using Google cluster data demonstrates that ROSA is superior to the
conventional online scheduling algorithms in terms of cost saving.
CC16NXT17 TITLE: Dynamic Bin Packing for On-Demand Cloud Resource Allocation
ABSTRACT: Dynamic Bin Packing (DBP) is a variant of classical bin packing, which
assumes that items may arrive and depart at arbitrary times. Existing works on
DBP generally aim to minimize the maximum number of bins ever used in the
packing. In this paper, we consider a new version of the DBP problem, namely,
the MinTotal DBP problem which targets at minimizing the total cost of the bins
used over time. It is motivated by the request dispatching problem arising in
cloud gaming systems. We analyze the competitive ratios of modified versions
of the commonly used First Fit, Best Fit, and Any Fit packing algorithms (the
family of algorithms that open a new bin only when no currently open bin can
accommodate the item to be packed) for the MinTotal DBP problem.
We show that the competitive ratio of Any Fit packing cannot be better than μ +
1, where μ is the ratio of the maximum item duration to the minimum item
duration. The competitive ratio of Best Fit packing is not bounded for any given
μ. For First Fit packing, if all the item sizes are smaller than 1/β of the bin
capacity (β > 1 is a constant), the competitive ratio has an upper bound of
(β/(β−1))·μ + 3β/(β−1) + 1. For the general case, the competitive ratio of First
Fit packing has an upper bound of 2μ + 7. We also propose a Hybrid First Fit
packing algorithm that achieves a competitive ratio no larger than (5/4)μ + 19/4
when μ is not known, and no larger than μ + 5 when μ is known.
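For reference, the classical static First Fit rule that these modified algorithms build on can be sketched in a few lines; the paper's MinTotal variants additionally handle item departures and per-bin time costs, which this sketch omits:

```python
def first_fit(items, capacity):
    """Classical First Fit: place each item into the lowest-indexed open
    bin with enough remaining room, opening a new bin only when none
    fits (which makes it an Any Fit algorithm)."""
    bins = []  # each bin is a list of item sizes
    for size in items:
        for b in bins:
            if sum(b) + size <= capacity:
                b.append(size)
                break
        else:  # no open bin can accommodate the item
            bins.append([size])
    return bins
```

In the dynamic setting, each game-session request is an item, each server is a bin, and the cost being minimized is the accumulated server-hours rather than the peak bin count.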
CC16NXT18 TITLE: A Scalable Data Chunk Similarity based Compression Approach for
Efficient Big Sensing Data Processing on Cloud
ABSTRACT: Big sensing data is prevalent in both industry and scientific research
applications where the data is generated with high volume and velocity. Cloud
computing provides a promising platform for big sensing data processing and
storage as it provides a flexible stack of massive computing, storage, and
software services in a scalable manner. Current big sensing data processing on
the Cloud has adopted some data compression techniques. However, due to the
high volume and velocity of big sensing data, traditional data compression
techniques lack sufficient efficiency and scalability for data processing. Based on
specific on-Cloud data compression requirements, we propose a novel scalable
data compression approach based on calculating similarity among the
partitioned data chunks. Instead of compressing basic data units, the
compression will be conducted over partitioned data chunks. To restore original
data sets, restoration functions and predictions are designed. MapReduce is
used for the algorithm implementation to achieve extra scalability on the
Cloud. With real-world meteorological big sensing data experiments on the U-Cloud
platform, we demonstrate that the proposed scalable compression approach
based on data chunk similarity can significantly improve data compression
efficiency with affordable data accuracy loss.
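A minimal sketch of chunk-level similarity compression follows; the chunk representation and the tolerance `tol` (a hypothetical accuracy-loss bound) are illustrative assumptions, not the paper's actual similarity model:

```python
def compress_chunks(chunks, tol=0.1):
    """Similarity-based compression sketch: a chunk whose values all lie
    within tol of an already-stored base chunk is replaced by a
    reference to that base instead of being stored again."""
    bases, out = [], []
    for chunk in chunks:
        match = next(
            (i for i, base in enumerate(bases)
             if len(base) == len(chunk)
             and max(abs(a - b) for a, b in zip(base, chunk)) <= tol),
            None)
        if match is None:
            bases.append(chunk)
            out.append(("base", len(bases) - 1))
        else:
            out.append(("ref", match))
    return bases, out

def restore(bases, out):
    """Restoration function: approximate each reference by its base."""
    return [bases[i] for _, i in out]
```

Because whole chunks rather than basic data units are compared, the per-item comparison cost is amortized, and the restoration error stays within the chosen tolerance.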
CC16NXT19 TITLE: A Secure and Dynamic Multi-Keyword Ranked Search Scheme over
Encrypted Cloud Data
ABSTRACT: Due to the increasing popularity of cloud computing, more and more
data owners are motivated to outsource their data to cloud servers for great
convenience and reduced cost in data management. However, sensitive data
should be encrypted before outsourcing to meet privacy requirements, which
renders data utilization such as keyword-based document retrieval obsolete. In this paper,
we present a secure multi-keyword ranked search scheme over encrypted cloud
data, which simultaneously supports dynamic update operations like deletion
and insertion of documents. Specifically, the vector space model and the widely-
used TF x IDF model are combined in the index construction and query
generation. We construct a special tree-based index structure and propose a
“Greedy Depth-first Search” algorithm to provide efficient multi-keyword ranked
search. The secure kNN algorithm is utilized to encrypt the index and query
vectors, and meanwhile ensure accurate relevance score calculation between
encrypted index and query vectors. In order to resist statistical attacks, phantom
terms are added to the index vector for blinding search results. Due to the use of
our special tree-based index structure, the proposed scheme can achieve sub-
linear search time and deal with the deletion and insertion of documents
flexibly.
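The vector space model with TF x IDF weighting used in the index and query construction can be sketched over plaintext as follows; in the actual scheme these vectors would then be encrypted with the secure kNN algorithm, a step omitted here:

```python
import math
from collections import Counter

def tfidf_index(docs):
    """Build TF x IDF weight vectors for a document collection
    (the plaintext vector-space step that precedes index encryption)."""
    n = len(docs)
    tokenized = [d.lower().split() for d in docs]
    df = Counter(t for toks in tokenized for t in set(toks))
    return [
        {t: (tf / len(toks)) * math.log(n / df[t])
         for t, tf in Counter(toks).items()}
        for toks in tokenized
    ]

def relevance(query, vector):
    """Relevance score of one document vector for a multi-keyword query."""
    return sum(vector.get(t, 0.0) for t in query.lower().split())
```

Ranked search then amounts to sorting documents by this score; the tree index lets the scheme prune whole subtrees whose best possible score is already too low.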
CC16NXT20 TITLE: A Secure Anti-Collusion Data Sharing Scheme for Dynamic Groups in the
Cloud
ABSTRACT: Benefiting from cloud computing, users can achieve an effective and
economical approach to data sharing among group members in the cloud, with
the advantages of low maintenance and little management cost. Meanwhile, we
must provide security guarantees for the shared data files since they are
outsourced. Unfortunately, because of frequent changes in membership,
sharing data in a privacy-preserving manner remains a challenging issue,
especially for an untrusted cloud subject to collusion attacks. Moreover, in
existing schemes, the security of key distribution relies on a secure
communication channel; however, having such a channel is a strong assumption
that is difficult to realize in practice. In this paper, we propose a secure data sharing
scheme for dynamic members. First, we propose a secure way for key
distribution without any secure communication channels, and the users can
securely obtain their private keys from the group manager. Second, our scheme
achieves fine-grained access control: any user in the group can use the
resources in the cloud, and revoked users cannot access the cloud again after
revocation. Third, we protect the scheme from collusion attacks, which means
that revoked users cannot obtain the original data file even if they conspire
with the untrusted cloud. In our approach, by leveraging a polynomial function,
we achieve a secure user revocation scheme. Finally, our scheme achieves fine
efficiency: existing users need not update their private keys when a new user
joins the group or a user is revoked.
CC16NXT21 TITLE: CDA Generation and Integration for Health Information Exchange Based
on Cloud Computing System
ABSTRACT: In the cloud, for achieving access control and keeping data
confidential, the data owners could adopt attribute-based encryption to encrypt
the stored data. Users with limited computing power are, however, more likely
to delegate the mask of the decryption task to the cloud servers to reduce the
computing cost. As a result, attribute-based encryption with delegation
emerges. Still, there are caveats and questions remaining in previous relevant
works. For instance, during the delegation, the cloud servers could tamper with
or replace the delegated ciphertext and return a forged computing result with
malicious intent. They may also cheat eligible users by responding that they are
ineligible, for the purpose of cost saving. Furthermore, during the encryption,
the access policies may not be flexible enough. Since policies over general
circuits enable the strongest form of access control, a construction for realizing
circuit ciphertext-policy attribute-based hybrid encryption with verifiable
delegation is considered in our work. In such a system, combined with
verifiable computation and an encrypt-then-MAC mechanism, the data
confidentiality, the fine-grained access control, and the correctness of the
delegated computing results are well guaranteed at the same time. Besides, our
scheme achieves security against chosen-plaintext attacks under the k-multilinear
campaign confirms the feasibility and efficiency of the proposed solution.
CC16NXT23 TITLE: CloudArmor: Supporting Reputation-Based Trust Management for Cloud
Services
ABSTRACT: Trust management is one of the most challenging issues for the
adoption and growth of cloud computing. The highly dynamic, distributed, and
non-transparent nature of cloud services introduces several challenging issues
such as privacy, security, and availability. Preserving consumers' privacy is not an
easy task due to the sensitive information involved in the interactions between
consumers and the trust management service. Protecting cloud services against
their malicious users (e.g., such users might give misleading feedback to
disadvantage a particular cloud service) is a difficult problem. Guaranteeing the
availability of the trust management service is another significant challenge
because of the dynamic nature of cloud environments. In this article, we
describe the design and implementation of CloudArmor, a reputation-based
trust management framework that provides a set of functionalities to deliver
trust as a service (TaaS), which includes i) a novel protocol to prove the
credibility of trust feedbacks and preserve users' privacy, ii) an adaptive and
robust credibility model for measuring the credibility of trust feedbacks to
protect cloud services from malicious users and to compare the trustworthiness
of cloud services, and iii) an availability model to manage the availability of the
decentralized implementation of the trust management service. The feasibility
and benefits of our approach have been validated by a prototype and
experimental studies using a collection of real-world trust feedbacks on cloud
services.
CC16NXT24 TITLE: Conditional Identity-Based Broadcast Proxy Re-Encryption and Its
Application to Cloud Email
ABSTRACT: Recently, a number of extended Proxy Re-Encryptions (PRE), e.g.
Conditional (CPRE), identity-based PRE (IPRE) and broadcast PRE (BPRE), have
been proposed for flexible applications. By incorporating CPRE, IPRE and BPRE,
this paper proposes a versatile primitive referred to as conditional identity-based
broadcast PRE (CIBPRE) and formalizes its semantic security. CIBPRE allows a
sender to encrypt a message to multiple receivers by specifying these receivers'
identities, and the sender can delegate a re-encryption key to a proxy so that
the proxy can convert the initial ciphertext into a new one for a new set of
intended receivers. Moreover, the re-encryption key can be associated with a
condition such that only matching ciphertexts can be re-encrypted, which
allows the original sender to enforce access control over his remote ciphertexts
in a fine-grained manner. We propose an efficient CIBPRE scheme with provable
security. In the instantiated scheme, the initial ciphertext, the re-encrypted
ciphertext, and the re-encryption key are all of constant size, and the
parameters to generate a re-encryption key are independent of the original
receivers of any initial ciphertext. Finally, we show an application of our CIBPRE
to a secure cloud email system that is advantageous over existing secure email
systems based on the Pretty Good Privacy protocol or identity-based encryption.
CC16NXT25 TITLE: Federated Campus Cloud Colombian Initiative
ABSTRACT: The desktop cloud paradigm arises from combining cloud computing
with volunteer computing systems in order to harvest the idle computational
resources of volunteers' computers. Students usually underuse university
computer rooms. As a result, a desktop cloud can be seen as a form of high-
performance computing (HPC) at a low cost. When the capacity of a desktop
cloud is insufficient to execute an HPC project, a new opportunity for
collaborative work among universities appears, resulting in a federation of
desktop cloud systems that creates a significant amount of virtual resources
from multiple providers on non-dedicated infrastructure. Even though cloud
federation generates research activity today, neither interoperability among
several implementations of cloud computing nor the federation of desktop
clouds is a resolved issue. Therefore, our initiative aims to gather the existing
and idle computer resources provided by the participating universities to form
a cloud federation on non-dedicated infrastructure.
MOBILE COMPUTING
MC16NXT01 TITLE: Grasping Popular Applications in Cellular Networks with Big Data
Analytics Platforms
ABSTRACT: Internet access through cellular networks is rapidly growing, driven
by the great success of the mobile apps paradigm and the overwhelming
popularity of social-related multimedia services such as YouTube, Facebook, or
even WhatsApp. Understanding the functioning, performance and traffic
generated by these applications is paramount for ISPs, especially for cellular
operators, who must manage the huge surge of volume and number of users
with the constraints and challenges of cellular networks. In this paper we study
important networking aspects of three popular applications in cellular networks:
YouTube, Facebook and WhatsApp. Our evaluations span the Content Delivery
Networks (CDNs) hosting these services, their traffic characteristics, and their
performance. The analysis is performed on top of real cellular network traffic
monitored at the nationwide cellular network of a major European ISP. Due to
privacy issues and given the huge amount of data generated by these
applications as well as the large number of monitored customers, the analysis
has been done in an online fashion, using a customized Big Data Analytics (BDA)
platform called DBStream. We overview DBStream and discuss other potential
solutions currently available for traffic monitoring and analysis of big networking
data. To the best of our knowledge, this is the first paper providing a complete
analysis of popular services in cellular networks, using BDA platforms.
MC16NXT02 TITLE: Computing with Nearby Mobile Devices: a Work Sharing Algorithm for
Mobile Edge-Clouds
ABSTRACT: As mobile devices evolve to be powerful and pervasive computing
tools, their usage also continues to increase rapidly. However, mobile device
users frequently experience problems when running intensive applications on
the device itself, or offloading to remote clouds, due to resource shortage and
connectivity issues. Ironically, most users’ environments are saturated with
devices with significant computational resources. This paper argues that nearby
mobile devices can efficiently be utilized as a crowd-powered resource cloud to
complement the remote clouds. Node heterogeneity, unknown worker
capability, and dynamism are identified as essential challenges to be addressed
when scheduling work among nearby mobile devices. We present a work sharing
model, called Honeybee, using an adaptation of the well-known work stealing
method to load-balance independent jobs among heterogeneous mobile nodes,
which can accommodate nodes randomly leaving and joining the system. The
overall strategy of Honeybee is to focus on short-term goals, taking advantage of
opportunities as they arise, based on the concepts of proactive workers and
opportunistic delegator. We evaluate our model using a prototype framework
built using Android and implement two applications. We report speedups of up
to 4 with seven devices and energy savings up to 71% with eight devices.
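The Honeybee-style work sharing described above can be illustrated with a toy simulation; all names, the tie-breaking rule, and the batch size are assumptions for illustration, not the paper's actual implementation:

```python
import collections

def honeybee_share(jobs, worker_speeds, batch=2):
    """Toy work-sharing simulation: each worker repeatedly 'steals' a small
    batch of independent jobs from a shared queue, so faster (or newly
    joined) workers naturally take on more work."""
    queue = collections.deque(jobs)
    done = {w: [] for w in worker_speeds}
    clock = {w: 0.0 for w in worker_speeds}
    while queue:
        # The worker that becomes free earliest steals the next batch.
        w = min(clock, key=clock.get)
        stolen = [queue.popleft() for _ in range(min(batch, len(queue)))]
        for job_cost in stolen:
            clock[w] += job_cost / worker_speeds[w]  # heterogeneous speeds
            done[w].append(job_cost)
    return done, max(clock.values())  # per-worker jobs, overall makespan

# Usage: 8 unit-cost jobs shared between one fast and one slow device.
done, makespan = honeybee_share([1.0] * 8, {"fast": 4.0, "slow": 1.0})
```

Because idle workers pull work rather than having it pushed to them, a fast device naturally completes more jobs, and a newly joined device can simply start stealing from the shared queue.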
MC16NXT03 TITLE: FakeMask: A Novel Privacy Preserving Approach for Smartphones
ABSTRACT: Users can enjoy personalized services provided by various context-
aware applications that collect users' contexts through sensor-equipped smart
phones. Meanwhile, serious privacy concerns arise due to the lack of privacy
preservation mechanisms. Currently, most mechanisms apply passive defense
policies in which the contexts released from a privacy preservation system are
always real, leaving a high probability that an adversary infers the
hidden sensitive contexts of the users. In this paper, we apply a deception
policy for privacy preservation and present a novel technique, FakeMask, in
which fake contexts may be released to provably preserve users' privacy. The
output sequence of contexts by FakeMask can be accessed by the untrusted
context-aware applications or be used to answer queries from those
applications. Since the output contexts may be different from the original
contexts, an adversary has greater difficulty in inferring the real contexts.
Therefore, FakeMask limits what adversaries can learn from the output
sequence of contexts about the user being in sensitive contexts, even if the
adversaries are powerful enough to have the knowledge about the system and
the temporal correlations among the contexts. The essence of FakeMask is a
privacy checking algorithm which decides whether to release a fake context for
the current context of the user. We present a novel privacy checking algorithm
and an efficient variant that accelerates the privacy checking process. Extensive
evaluation experiments on real smart phone context traces of users demonstrate
the improved performance of FakeMask over other approaches.
MC16NXT04 TITLE: Understanding Smartphone Sensor and App Data for Enhancing the
Security of Secret Questions
ABSTRACT: Many web applications provide secondary authentication methods,
i.e., secret questions (or password recovery questions), to reset the account
password when a user’s login fails. However, the answers to many such secret
questions can be easily guessed by an acquaintance or exposed to a stranger
that has access to public online tools (e.g., online social networks); moreover, a
user may forget her/his answers long after creating the secret questions. Today’s
prevalence of smart phones has granted us new opportunities to observe and
understand how the personal data collected by smart phone sensors and apps
can help create personalized secret questions without violating the users’ privacy
concerns. In this paper, we present a Secret-Question based Authentication
system, called “Secret-QA”, that creates a set of secret questions based on
people’s smart phone usage. We develop a prototype on Android smart phones,
and evaluate the security of the secret questions by asking the
acquaintances/strangers who participated in our user study to guess the answers
with and without the help of online tools; meanwhile, we observe the questions’
reliability by asking participants to answer their own questions. Our
experimental results reveal that the secret questions related to motion sensors,
calendar, app installment, and part of legacy app usage history (e.g., phone calls)
have the best memorability for users as well as the highest robustness to attacks.
MC16NXT05 TITLE: FLANDROID: Energy-Efficient Recommendations of Reliable Context
Providers for Android Applications
ABSTRACT: Mobile applications are becoming more and more popular with the
prevalence of mobile operating systems and mobile Internet. Many of them
consume services provided by the underlying infrastructure and platforms as a
part of their application environmental contexts. However, application failures or
performance degradation may result from inadequate handling of these
environmental contexts in the implementations of the mobile applications. In
this paper, we propose a framework to enable mobile applications to consume
services offered by a reliable context provider with high probability at run time.
We report a case study on a suite of five real-world mobile applications with 74
real faults on real mobile phones involving 50 users. The results of the case study
show that our framework can significantly improve the reliability of mobile
applications with respect to the failures due to buggy-context-provider faults
with low slowdown and energy overheads.
MC16NXT06 TITLE: Towards a Fully Cloudified Mobile Network Infrastructure
ABSTRACT: Cloud computing enables the on-demand delivery of resources for a
multitude of services and gives the opportunity for small agile companies to
compete with large industries. In the telco world, cloud computing is currently
mostly used by Mobile Network Operators (MNO) for hosting non-critical
support services and selling cloud services such as applications and data storage.
MNOs are investigating the use of cloud computing to deliver key
telecommunication services in the access and core networks. Without this,
MNOs miss the opportunity both to combine over-the-top (OTT) and value-added
services with their fundamental service offerings and to leverage cost-effective
commodity hardware. Being able to leverage cloud computing
technology effectively for the telco world is the focus of Mobile Cloud
Networking (MCN). This paper presents the key results of the MCN integrated project
that includes its architecture advancements, prototype implementation and
evaluation. Results show the efficiency and simplicity with which an MNO can deploy
and manage the complete service lifecycle of fully cloudified, composed services
that combine OTT/IT- and mobile-network-based services running on commodity
hardware. The extensive performance evaluation of MCN, using two key Proof-
of-Concept scenarios that compose many services to deliver novel, converged,
elastic, on-demand mobile-network-based and OTT services, proves
the feasibility of such fully virtualized deployments. Results show that it is
beneficial to extend cloud computing to telco usage and run fully cloudified
mobile-network-based systems with clear advantages and new service
opportunities for MNOs and end users.
MC16NXT07 TITLE: Rapid Localization and Extraction of Street Light Poles in Mobile LiDAR
Point Clouds: A Supervoxel-Based Approach
ABSTRACT: This paper presents a supervoxel-based approach for automated
localization and extraction of street light poles in point clouds acquired by a
mobile LiDAR system. The method consists of five steps: preprocessing,
localization, segmentation, feature extraction, and classification. First, the raw
point clouds are divided into segments along the trajectory, the ground points
are removed, and the remaining points are segmented into supervoxels. Then, a
robust localization method is proposed to accurately identify the pole-like
objects. Next, a localization-guided segmentation method is proposed to obtain
pole-like objects. Subsequently, the extracted pole features are classified using
support vector machines and random forests. The proposed approach was evaluated on
three datasets with 1,055 street light poles and 701 million points. Experimental
results show that our localization method achieved an average recall value of
98.8%. A comparative study proved that our method is more robust and efficient
than other existing methods for localization and extraction of street light poles.
MC16NXT08 TITLE: Rethinking Permission Enforcement Mechanism on Mobile Systems
ABSTRACT: To protect sensitive resources from unauthorized use, modern
mobile systems, such as Android and iOS, design a permission-based access
control model. However, the current model cannot enforce fine-grained control
over dynamic permission-use contexts, causing two severe security
problems. First, any code package in an application could use the granted
permissions, inducing attackers to embed malicious payloads into benign apps.
Second, the permissions granted to a benign application may be utilized by an
attacker through vulnerable application interactions. Although ad hoc solutions
have been proposed, none could systematically solve these two issues within a
unified framework. This paper presents the first such framework to provide
context-sensitive permission enforcement that regulates permission use policies
according to system-wide application contexts, which cover both intra-
application context and inter-application context. We build a prototype system
on Android, named FineDroid, to track such context during application
execution. To flexibly regulate the context-sensitive permission rules, FineDroid
features a policy framework that can express generic application contexts. We
demonstrate the benefits of FineDroid by instantiating several security
extensions based on the policy framework for three potential users: end users,
administrators, and developers. Furthermore, FineDroid is shown to introduce
only minor overhead.
MC16NXT09 TITLE: Heating Dispersal for Self-Healing NAND Flash Memory
ABSTRACT: Substantially reduced lifetimes are becoming a critical issue in NAND
flash memory with the advent of multi-level cell and triple-level cell flash
memory. Researchers discovered that heating can cause worn-out NAND flash
cells to become reusable and greatly extend the lifetime of flash memory cells.
However, the heating process consumes a substantial amount of power, and
some fundamental changes are required for existing NAND flash management
techniques. In particular, all existing wear-leveling techniques are based on the
principle of evenly distributing writes and erases. For self-healing NAND flash,
this may cause NAND flash cells to be worn out in a short period of time.
Moreover, frequently healing these cells may drain the energy quickly in
battery-driven mobile devices, which is defined as the concentrated heating problem. In
this paper, we propose a novel wear-leveling scheme called DHeating (Dispersed
Heating) to address the problem. In DHeating, rather than evenly distributing
writes and erases over a time period, write and erase operations are scheduled
on a small number of flash memory cells at a time, so that these cells can be
worn out and healed much earlier than other cells. In this way, we can avoid
quick energy depletion caused by concentrated heating. In addition, the heating
process takes several seconds and has become the new performance bottleneck.
In order to address this issue, we propose a lazy heating repair scheme, which
eases the long delays caused by heating by delaying the heating operation and
using system idle time for repair.
Furthermore, the flash memory’s reliability degrades as the flash
memory cells approach their expected worn-out time. We propose an early heating
strategy to solve this reliability problem. With the extended lifetime provided by
self-healing, we can trade some lifetime for reliability. The idea is to start the
healing process earlier than the expected worn-out time. We evaluate our
scheme based on an embedded platform. The experimental results show that
the proposed scheme can effectively prolong the consecutive heating time
interval, alleviate the long time delays caused by the heating, and enhance the
reliability for self-healing flash memory.
MC16NXT10 TITLE: Energy-Efficient Target Tracking by Mobile Sensors with Limited Sensing
Range
ABSTRACT: The recent advancements of technology in robotics and wireless
communication have enabled the low-cost and large-scale deployment of mobile
sensor nodes for target tracking which is a critical application scenario of
wireless sensor networks. Due to the constraints of limited sensing range, it is of
great importance to design node coordination mechanism for reliable tracking so
that at least the target can always be detected with a high probability while the
total network energy cost can be reduced for longer network lifetime. In this
paper, we deal with this problem considering both the unreliable wireless
channel and the network energy constraint. We transform the original problem
into a dynamic coverage problem and decompose it into two subproblems. By
exploiting the online estimate of the target location, we first decide the locations
to which the mobile nodes should move so that reliable tracking can be
guaranteed. Then, we assign different nodes to each location so that the
total energy cost in terms of moving distance can be minimized. Extensive
simulations under various system settings are employed to evaluate the
effectiveness of our solution.
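The second subproblem above, assigning mobile nodes to chosen sensing locations so that the total moving distance is minimized, can be sketched as a small assignment search. The names are illustrative; a practical system would use a polynomial-time matching algorithm (e.g. the Hungarian method) for larger instances:

```python
from itertools import permutations

def assign_nodes(node_pos, target_pos):
    """Brute-force assignment of mobile nodes to required sensing locations,
    minimizing the total Euclidean moving distance."""
    def dist(p, q):
        return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5
    best = None
    for perm in permutations(range(len(node_pos))):
        # perm[j] is the node sent to location j.
        cost = sum(dist(node_pos[i], target_pos[j]) for j, i in enumerate(perm))
        if best is None or cost < best[1]:
            best = (perm, cost)
    return best

# Usage: two nodes, two required locations; the crossed assignment is cheaper.
perm, cost = assign_nodes([(0, 0), (10, 0)], [(9, 0), (1, 0)])
```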
MC16NXT11 TITLE: Fusion of Modified Bat Algorithm Soft Computing and Dynamic Model
Hard Computing to Online Self-Adaptive Fuzzy Control of Autonomous Mobile
Robots
ABSTRACT: This paper presents a fusion methodology of modified bat algorithm
(BA) soft computing and dynamic model hard computing to online self-adaptive
fuzzy control of autonomous mobile robots in a field-programmable gate array
(FPGA) chip. This fusion approach, called fusion of BA fuzzy control (FBAFC),
gains the benefits of BA, online fuzzy control, dynamic model, Taguchi method,
and FPGA technique. The FPGA realization of the proposed FBAFC fusion method
is more effective in practice for challenging real-world embedded applications by
using system-on-a-programmable-chip (SOPC) technology. Experimental results
and comparative works are conducted to illustrate the merits of the proposed
SOPC-based FBAFC optimal controller over other existing methods for mobile
robots.
MC16NXT12 TITLE: Characterizing and Modeling User Behavior in a Large-scale Mobile Live
Streaming System
ABSTRACT: In mobile live streaming systems, users are fairly limited in their
interaction with the streaming objects due to the constraints imposed by
mobile devices and the event-driven nature of live content. The constraints could
lead to unique user behavior characteristics, which have yet to be explored. This
paper investigates over 9 million access logs collected from the PPTV live
streaming system, with an emphasis on the discrepancies that might exist when
users access the live streaming catalog from mobile and non-mobile terminals.
We observe a much higher likelihood of abandoning sessions by mobile users,
and examine the structure of abandoned sessions from the perspectives of time
of day, channel content and mobile device types. Surprisingly, we find relatively
low abandonment rates during peak-load time periods and a notable impact of
mobile device type (i.e. Android or iOS) on the abandonment behavior. To
further capture the intrinsic characteristics of user behavior, we develop a series
of models for session duration, user activity and time-dynamics of user
arrivals/departures. More importantly, we relate the model parameters to
physical and real-life meanings. The observations and models shed light on video
delivery system, telco-CDNs and mobile applications.
MC16NXT13 TITLE: Assessing the Implications of Cellular Network Performance on Mobile
Content Access
ABSTRACT: Mobile applications such as VoIP, (live) gaming, or video streaming
have diverse QoS requirements ranging from low delay to high throughput. The
optimization of the network quality experienced by end-users requires detailed
knowledge of the expected network performance. Also, the achieved service
quality is affected by a number of factors, including network operator and
available technologies. However, most studies measuring the cellular network do
not consider the performance implications of network configuration and
management. To this end, this paper reports about an extensive data set of
cellular network measurements, focused on analyzing root causes of mobile
network performance variability. Measurements conducted on a 4G cellular
network in Germany show that management and configuration decisions have a
substantial impact on the performance. Specifically, it is observed that the
association of mobile devices to a point of presence (POP) within the operator's
network can influence the end-to-end performance to a large extent. Given the
collected data, a model predicting the POP assignment and its resulting RTT
leveraging Markov chain and machine learning approaches is developed. RTT
increases of 58% to 73% compared to the optimum performance are observed in
more than 57% of the measurements. Measurements of the response and page
load times of popular websites lead to similar results, namely, a median increase
of 40% between the worst and the best performing POP.
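The Markov-chain component of the POP-assignment prediction described above can be illustrated by estimating first-order transition probabilities from a sequence of observed assignments. The function names are hypothetical; the paper's actual model additionally uses machine-learning features:

```python
from collections import Counter, defaultdict

def fit_markov(pop_sequence):
    """Estimate first-order transition probabilities between POP
    assignments observed in consecutive measurements."""
    counts = defaultdict(Counter)
    for a, b in zip(pop_sequence, pop_sequence[1:]):
        counts[a][b] += 1
    return {a: {b: n / sum(c.values()) for b, n in c.items()}
            for a, c in counts.items()}

def predict_next(trans, current):
    """Most likely next POP given the current assignment."""
    return max(trans[current], key=trans[current].get)

# Usage: a short observed sequence of POP assignments.
seq = ["A", "A", "B", "A", "A", "A", "B", "B", "A"]
trans = fit_markov(seq)
```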
MC16NXT14 TITLE: Magnetic Tensor Sensor and Way-finding Method based on Geomagnetic
Field Effects with Applications for Visually Impaired Users
ABSTRACT: This paper presents a method utilizing geomagnetic field effects
commonly found in nature to help visually impaired persons (VIPs) navigate
safely and efficiently; both indoor and outdoor applications are considered.
Magnetic information indicating special locations can be incorporated as
waypoints on a map to provide a basis to help the user follow a map that
amalgamates the waypoints into spatial information. Along with a magnetic
tensor sensor (MTS), a navigation system for helping VIPs more effectively
comprehend their surroundings is presented. With the waypoint-enhanced map
and an improved dynamic time warping algorithm, this system estimates the
user’s locations from real-time measured magnetic data. Methods using image
data to enhance waypoints at dangerous locations are discussed. The MTS-
enhanced method can be integrated into existing personal mobile devices (with
built-in sound, image, video and vibration alert capabilities) to take advantage
of the rapidly developing internet, global positioning systems (GPS) and
computing technologies to overcome several shortcomings of blind-assistive
devices. A prototype MTS-enhanced system for indoor/outdoor navigation has
been developed and demonstrated experimentally. Although the MTS and
algorithm are presented in the context of way-finding for a VIP, the findings
presented here provide a basis for a wide range of applications where
geomagnetic field effects offer an advantage.
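The core matching step, comparing a real-time magnetic measurement against stored waypoint signatures, relies on dynamic time warping; a minimal 1-D sketch follows (hypothetical names; the paper uses an improved DTW variant over magnetic tensor data):

```python
def dtw_distance(query, reference):
    """Classic dynamic-time-warping cost between a measured magnetic
    signature and a stored waypoint signature (1-D sequences)."""
    n, m = len(query), len(reference)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(query[i - 1] - reference[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # skip a query sample
                                 D[i][j - 1],      # skip a reference sample
                                 D[i - 1][j - 1])  # match both
    return D[n][m]

def locate(measured, waypoint_db):
    """Return the waypoint whose stored signature warps best onto the
    real-time measurement."""
    return min(waypoint_db, key=lambda w: dtw_distance(measured, waypoint_db[w]))
```

DTW tolerates the varying walking speeds of the user, which simple sample-by-sample comparison would not.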
MC16NXT15 TITLE: Declarative Framework for Specification, Simulation and Analysis of
Distributed Applications
ABSTRACT: Researchers have recently shown that declarative database query
languages, such as Datalog, could naturally be used to specify and implement
network protocols and services. In this paper, we present a declarative
framework for the specification, execution, simulation, and analysis of
distributed applications. Distributed applications, including routing protocols,
can be specified using a Declarative Networking language, called D2C, whose
semantics capture the notion of a Distributed State Machine (DSM), i.e., a
network of computational nodes that communicate with each other through the
exchange of data. The D2C specification can be directly executed using the DSM
computational infrastructure of our framework. The same specification can be
simulated and formally verified. The simulation component integrates the DSM
tool within a network simulation environment and allows developers to simulate
network dynamics and collect data about the execution in order to evaluate
application responses to network changes. The formal analysis component of our
framework, instead, complements the empirical testing by supporting the
verification of different classes of properties of distributed algorithms, including
convergence of network routing protocols. To demonstrate the generality of our
framework, we show how it can be used to analyze two classes of network
routing protocols, a path vector and a Mobile Ad-Hoc Network (MANET) routing
protocol, and execute a distributed algorithm for pattern formation in multi-
robot systems.
MC16NXT16 TITLE: Complexity Reduction by Modified Scale-Space Construction in SIFT
Generation Optimized for a Mobile GPU
ABSTRACT: Scale-invariant feature transform (SIFT) is one of the most widely
used local features for computer vision in mobile devices. A mobile GPU is often
used to run computer-vision applications using SIFT features, but the
performance in such a case is not powerful enough to generate SIFT features in
real time. This paper proposes an efficient scheme to optimize the SIFT algorithm
for a mobile GPU. It analyzes the conventional scale-space construction step in
the SIFT generation, finding that reducing the size of the Gaussian filter and the
scale-space image leads to a significant speed-up with only a slight degradation
of the quality of the features. Based on this observation, the SIFT algorithm is
modified and implemented for real-time execution. Additional optimization
techniques are employed for a further speed-up by efficiently utilizing both the
CPU and the GPU in a mobile processor. The proposed SIFT generation scheme
achieves a processing speed of 28.30 fps for an image with a resolution of
1280x720 running on a Galaxy S5 LTE-A device, thereby gaining a speed-up by
factors of 114.78 and 4.53 over CPU- and GPU-only implementations,
respectively.
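The scale-space construction that the paper modifies can be illustrated with a minimal single-octave difference-of-Gaussians sketch in NumPy (assumed helper names and sigma values; not the paper's GPU implementation):

```python
import numpy as np

def gaussian_kernel(sigma, radius):
    """1-D normalized Gaussian kernel."""
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-x * x / (2 * sigma * sigma))
    return k / k.sum()

def blur(img, sigma):
    """Separable Gaussian blur: filter rows, then columns."""
    k = gaussian_kernel(sigma, radius=int(3 * sigma) or 1)
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, tmp)

def dog_octave(img, sigmas=(1.0, 1.4, 2.0)):
    """Difference-of-Gaussians stack for one octave of the scale space."""
    blurred = [blur(img, s) for s in sigmas]
    return [b2 - b1 for b1, b2 in zip(blurred, blurred[1:])]
```

Shrinking the Gaussian filter and halving each image dimension, as the paper proposes, cuts the convolution work roughly fourfold per halving, which is where the reported speed-up originates.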
MC16NXT17 TITLE: Experimental Evaluation of Impulsive Ultrasonic Intra-Body
Communications for Implantable Biomedical Devices
ABSTRACT: Biomedical systems of miniaturized implantable sensors and
actuators interconnected in an intra-body area network could enable
revolutionary clinical applications. Given the well understood limitations of radio
frequency (RF) propagation in the human body, in our previous work we
investigated the use of ultrasonic waves as an alternative physical carrier of
information, and proposed Ultrasonic Wideband (UsWB), an ultrasonic
multipath-resilient integrated physical and medium access control (MAC) layer
protocol. In this paper, we discuss the design and implementation of a software-
defined test bed architecture for ultrasonic intra-body area networks, and
propose the first experimental demonstration of the feasibility of ultrasonic
communications in tissue mimicking materials. We first discuss in detail our
FPGA-based prototype implementation of UsWB. We then demonstrate how the
prototype can flexibly trade performance off for power consumption, and
achieve, for bit error rates (BER) no higher than 10^-6, either (i) high-data-rate
transmissions up to 700 kbit/s at a transmit power of -14 dBm (about 40 μW), or (ii)
lower-power, low-data-rate transmissions down to -21 dBm (about 8 μW) at 70 kbit/s. We
demonstrate that the UsWB MAC protocol allows multiple transmitter-receiver
pairs to coexist and dynamically adapt the transmission rate according to
channel and interference conditions to maximize throughput while satisfying
predefined reliability constraints. We also show how UsWB can be used to
enable a video monitoring medical application for implantable devices. Finally,
we propose (and validate through experiments) a statistical model of small-scale
fading for the ultrasonic intra-body channel.
MC16NXT18 TITLE: On Energy-Efficient Offloading in Mobile Cloud for Real Time Video
Applications
ABSTRACT: Batteries of modern mobile devices remain severely limited in
capacity, which makes energy consumption a key concern for mobile
applications, particularly for the computation intensive video applications.
Mobile devices can save energy by offloading computation tasks to the cloud;
yet the energy gain must exceed the additional communication cost for cloud
migration to be beneficial. The situation is further complicated with real-time
video applications that have stringent delay and bandwidth constraints. In this
paper, we closely examine the performance and energy efficiency of
representative mobile cloud applications under dynamic wireless network
channels and state-of-the-art mobile platforms. We identify the unique
challenges and opportunities for offloading real time video applications, and
develop a generic model for energy-efficient computation offloading accordingly
in this context. We propose a scheduling algorithm that makes adaptive
offloading decisions in fine granularity in dynamic wireless network conditions,
and verify its effectiveness through trace-driven simulations. We further present
case studies with advanced mobile platforms and practical applications to
demonstrate the superiority of our solution and the substantial gain of our
approach over baseline approaches.
MC16NXT19 TITLE: The Design of a Wearable RFID System for Real-time Activity Recognition
using Radio Patterns
ABSTRACT: Elderly care is one of the many applications supported by real-time
activity recognition systems. Traditional approaches use cameras, body sensor
networks, or radio patterns from various sources for activity recognition.
However, these approaches are limited due to ease-of-use, coverage, or privacy
preserving issues. In this paper, we present a novel wearable Radio Frequency
Identification (RFID) system that aims to provide an easy-to-use solution with high
detection coverage. Our system uses passive tags which are maintenance-free
and can be embedded into the clothes to reduce the wearing and maintenance
efforts. A small RFID reader is also worn on the user’s body to extend the
detection coverage as the user moves. We exploit RFID radio patterns and
extract both spatial and temporal features to characterize various activities. We
also address the issues of false negatives in tag readings and tag/antenna
calibration, and design a fast online recognition system. Antenna and tag
selection is done automatically to explore the minimum number of devices
required to achieve target accuracy. We develop a prototype system which
consists of a wearable RFID system and a smart phone to demonstrate the
working principles, and conduct experimental studies with four subjects over
two weeks. The results show that our system achieves a high recognition
accuracy of 93.6% with a latency of 5 seconds. Additionally, we show that the
system only requires two antennas and four tagged body parts to achieve a high
recognition accuracy of 85%.
MC16NXT20 TITLE: Drive Now, Text Later: Nonintrusive Texting-while-Driving Detection
using Smart phones
ABSTRACT: Texting-while-driving (TWD) is one of the top dangerous
behaviors for drivers. Many interesting systems and mobile phone applications
have been designed to help detect or combat TWD. However, for a TWD
detection system to be practical, a key property is its capability to distinguish
driver’s mobile phone from passengers’. Existing solutions to this problem
generally rely on user’s manual input, or utilize specific localization devices to
determine whether a mobile phone is at driver’s location. In this paper, we
propose a method that is able to detect TWD automatically without using
any extra devices. The idea is very simple: when a user is composing messages,
the smart phone embedded sensors (i.e. gyroscopes, accelerometers, and GPS)
collect the associated information such as touch strokes, holding orientation and
vehicle speed. This information will then be analyzed to see whether there exists
some specific TWD patterns. Extensive experiments have been conducted by
different persons and in different driving scenarios. The results show that our
approach can achieve a good detection accuracy with low false positive rate.
Besides being infrastructure-free and highly accurate, the method does not
access the content of messages and therefore is privacy-preserving.
MC16NXT21 TITLE: NoPSM: A Concurrent MAC Protocol over Low-data-rate Low-power
Wireless Channel without PRR-SINR Model
ABSTRACT: Concurrent MAC protocols can improve channel usage of wireless
sensor networks (WSNs), and provide a high-performance infrastructure for data
intensive applications. Most existing concurrent MAC protocols are based on
proactively constructed physical interference models, i.e., PRR-SINR models
(PSM). However, constructing a PSM for WSNs incurs relatively high bandwidth
and energy overheads. In this paper, we propose NoPSM, which does not rely on a
PSM to determine transmission concurrency. Instead, NoPSM reactively
constructs interference relationships by passively analyzing overlapping
relationships among the time logs of block data transmissions and
corresponding reception status of each packet in blocks. In this way, NoPSM has
two salient features. Firstly, NoPSM is able to construct interference
relationships among nodes quickly and accurately along with block data
transmissions without the need for network downtime. Secondly, based on the
constructed interference relationships, NoPSM can make decisions of
transmission concurrency with a comprehensive criterion, which not only
estimates quality of any active links after initiating a new link, but also estimates
throughput improvement gained from concurrent transmissions. NoPSM has
been implemented in Tinyos-2.1 and extensively evaluated in TOSSIM.
Experimental results show that NoPSM improves system throughput by up to
60% compared with a traditional CSMA protocol, which cannot exploit potential
transmission concurrency. Moreover, NoPSM can gain up to 55% throughput
improvement as compared to an existing reactive concurrent MAC.
MC16NXT22 TITLE: MobiCoRE: Mobile Device based Cloudlet Resource Enhancement for
Optimal Task Response
ABSTRACT: Cloudlets are small self-maintained clouds, deployed like hotspots,
that enhance the computational capabilities of mobile devices.
The limited resources of cloudlets can become heavily loaded during peak
utilization. Consequently, the per-user available computational capacity decreases,
and at times mobile devices find no execution-time benefit in using the cloudlet.
Researchers have proposed augmenting the cloudlet resources using mobile
devices; however, the proposed approaches do not consider the offered service
to load ratio while using mobile device resources. In this paper, we propose the
easy-to-implement Mobile Device based Cloudlet Resource Enhancement
(MobiCoRE) scheme while ensuring that: (i) a mobile device always has a time
benefit for the tasks it submits to the cloudlet, and (ii) the cloudlet-induced
mobile device load is a fraction of the device's own service requirement from the
cloudlet. We map MobiCoRE onto an M/M/c/K queue and model the system using
a birth-death Markov chain. Given the arrival rate λ, c CPU cores in the cloudlet,
a maximum of K tasks in the cloudlet, and P0 = f(λ, c, K, μ) the probability of
having no user in the cloudlet, we derive the condition
1 − P0 = (λ^K / (c^(K−c) c! μ^K)) P0 for the optimal average service time 1/μ of the
cloudlet such that mobile applications have maximum benefit from using cloudlet
services. We show that the optimal average service time is independent of the
applications' service requirements. Evaluation shows that MobiCoRE can
accommodate up to 50% extra users when operating at optimal service time and
sharing mobile resources for the remaining tasks, compared to completing the entire
user applications in cloudlet. Similarly, up to 47% time benefit can be achieved
for mobile devices by sharing only 16% computational resources with the
cloudlet.
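The quantity P0 above is the standard empty-system probability of an M/M/c/K queue, which can be computed directly from the arrival rate λ, service rate μ, core count c, and capacity K (the function name is illustrative):

```python
from math import factorial

def p0_mmck(lam, mu, c, K):
    """Probability of an empty M/M/c/K system (no users in the cloudlet),
    i.e. the P0 = f(lambda, c, K, mu) quantity in the abstract."""
    a = lam / mu   # offered load
    rho = a / c    # per-core utilization
    # States with fewer than c users: all arrivals are in service.
    busy = sum(a**n / factorial(n) for n in range(c))
    # States c..K: arrivals queue behind c busy cores.
    queued = (a**c / factorial(c)) * sum(rho**(n - c) for n in range(c, K + 1))
    return 1.0 / (busy + queued)

# Usage: a 2-core cloudlet admitting at most 5 tasks, with lambda = mu.
p_empty = p0_mmck(lam=1.0, mu=1.0, c=2, K=5)
```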
MC16NXT23 TITLE: Mobile-Edge Computing: Partial Computation Offloading Using Dynamic
Voltage Scaling
ABSTRACT: The incorporation of dynamic voltage scaling (DVS) technology into
computation offloading offers more flexibilities for mobile edge computing. In
this paper, we investigate partial computation offloading by jointly optimizing
the computational speed of smart mobile device (SMD), transmit power of SMD,
and offloading ratio with two system design objectives: energy consumption of
SMD minimization (ECM) and latency of application execution minimization (LM).
Considering the case where the SMD is served by a single cloud server, we
formulate both the ECM problem and the LM problem as nonconvex problems.
To tackle the ECM problem, we recast it as a convex one with the variable
substitution technique and obtain its optimal solution. To address the nonconvex
and nonsmooth LM problem, we propose a locally optimal algorithm with the
univariate search technique. Furthermore, we extend the scenario to a multiple
cloud servers system, where the SMD could offload its computation to a set of
cloud servers. In this scenario, we obtain the optimal computation distribution
among cloud servers in closed form for the ECM and LM problem. Finally,
extensive simulations demonstrate that our proposed algorithms can
significantly reduce the energy consumption and shorten the latency with
respect to the existing offloading schemes.
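As a rough illustration of the offloading-ratio tradeoff this abstract optimizes, the sketch below uses a conventional partial-offloading energy model from the mobile edge computing literature; all constants (kappa, B, N0, h, f_cloud) and the model form are illustrative assumptions, not taken from the paper:

```python
import math

def energy_and_latency(rho, f, p, *, C=1e9, D=1e6, kappa=1e-27,
                       B=1e6, N0=1e-9, h=1e-6, f_cloud=1e10):
    """Toy partial-offloading model (illustrative constants, not the paper's).

    rho: offloading ratio in [0, 1]; f: local CPU speed (Hz);
    p: transmit power (W); C: application cycles; D: application data (bits).
    """
    r = B * math.log2(1 + p * h / N0)         # Shannon uplink rate
    t_local = (1 - rho) * C / f               # local execution time
    e_local = kappa * f**2 * (1 - rho) * C    # CMOS dynamic energy (DVS model)
    t_tx = rho * D / r                        # upload time
    e_tx = p * t_tx                           # radio energy
    t_cloud = rho * C / f_cloud               # remote execution time
    latency = max(t_local, t_tx + t_cloud)    # local and remote run in parallel
    return e_local + e_tx, latency
```

Sweeping rho, f, and p over their feasible ranges with a model of this shape is one way to see why the ECM and LM objectives trade off against each other.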
MC16NXT24 TITLE: Understanding Requirements for Mobile Collaborative Applications in
Domains of Use
ABSTRACT: Several initiatives have implemented collaborative applications for
mobile settings as diverse as hospital work, wildlife, transportation, and
museums. The changing nature of mobile technology has resulted in a wide
variety of applications. We explored models, architectures, and applications
developed in the past 13 years to categorize the types of existing software and
extract a set of common core requirements that support mobile collaboration
independently of the current technology. This paper provides an analysis of the
domain of mobile collaborative systems, including a proposed division into several
domains of use and a study of the types of systems that exist in each of them. In
this way, developers can analyze their scenario of development to get an idea of
the most important requirements that should be considered for development.
MC16NXT25 TITLE: Iunius: A Cross Layer Peer-to-peer System with Device-to-device
Communications
ABSTRACT: Device-to-device (D2D) communications utilizing licensed spectrum
have been considered as a promising technology to improve cellular network
spectral efficiency and offload local traffic from cellular base stations (BSs). In
this paper, we develop Iunius: a peer-to-peer (P2P) system based on harvesting
data in a community utilizing multi-hop D2D communications. The Iunius system
optimizes D2D communications for P2P local file sharing, improves user
experience, and offloads traffic from the BSs. The Iunius system features cross-
layer integration of: 1) a wireless P2P protocol based on the BitTorrent protocol
in the application layer; 2) a simple centralized routing mechanism for multi-hop
D2D communications; 3) an interference cancellation technique for conventional
cellular (CC) uplink communications; and 4) a radio resource management
scheme to mitigate the interference between CC and D2D communications that
share the cellular uplink radio resources while maximizing the throughput of D2D
communications. Simulation results show that the proposed Iunius system can
increase the cellular spectral efficiency, reduce the traffic load of BSs, and
improve the data rate and energy saving for mobile users.
NETWORK SECURITY
NS16NXT01 TITLE: Contributory Broadcast Encryption with Efficient Encryption and Short
Ciphertexts
ABSTRACT: Broadcast encryption (BE) schemes allow a sender to securely
broadcast to any subset of members but require a trusted party to distribute
decryption keys. Group key agreement (GKA) protocols enable a group of
members to negotiate a common encryption key via open networks so that only
the group members can decrypt the ciphertexts encrypted under the shared
encryption key, but a sender cannot exclude any particular member from
decrypting the ciphertexts. In this paper, we bridge these two notions with a
hybrid primitive referred to as contributory broadcast encryption (ConBE). In this
new primitive, a group of members negotiate a common public encryption key
while each member holds a decryption key. A sender seeing the public group
encryption key can limit the decryption to a subset of members of his choice.
Following this model, we propose a ConBE scheme with short ciphertexts. The
scheme is proven to be fully collusion-resistant.
ABSTRACT: This paper considers the precoder design for multiantenna secure
cognitive radio networks. We use finite-alphabet inputs as the signaling and
exploit statistical channel state information (CSI) at the transmitter. We
maximize the secrecy rate of the secondary user and control the transmit power
and the power leakage to the primary receivers that share the same frequency
spectrum. The secrecy rate maximization is important for practical systems, but
challenging to solve, mainly due to two reasons. First, the secrecy rate with
statistical CSI is computationally prohibitive to evaluate. Second, the
optimization over the precoder is a nondeterministic polynomial-time hard (NP-
hard) problem. We utilize an accurate approximation of the secrecy rate to
reduce the computational effort and then propose a global optimization
approach based on branch-and-bound method. The idea is to define a simplex
and transform the secrecy rate into a concave function. The derived concave
function converges to the secrecy rate when the defined simplex shrinks down.
Using this feature, we solve a sequence of concave maximization problems over
iteratively shrinking simplices and eventually attain the globally optimal solution
that maximizes the approximation of the secrecy rate. When the complexity is
concerned, a low-complexity variant with limited number of iterations can be
used in practice. We demonstrate the performance gains when compared with
others through numerical examples.
NS16NXT08 TITLE: Secure Data Aggregation with Homomorphic Primitives in Wireless
Sensor Networks: A Critical Survey and Open Research Issues
ABSTRACT: With 20 million installs a day, third-party apps are a major reason for
the popularity and addictiveness of Facebook. Unfortunately, hackers have
realized the potential of using apps for spreading malware and spam. The
problem is already significant, as we find that at least 13% of apps in our dataset
are malicious. So far, the research community has focused on detecting
malicious posts and campaigns. In this paper, we ask the question: Given a
Facebook application, can we determine if it is malicious? Our key contribution is
in developing FRAppE-Facebook's Rigorous Application Evaluator-arguably the
first tool focused on detecting malicious apps on Facebook. To develop FRAppE,
we use information gathered by observing the posting behavior of 111K
Facebook apps seen across 2.2 million users on Facebook. First, we identify a set
of features that help us distinguish malicious apps from benign ones. For
example, we find that malicious apps often share names with other apps, and
they typically request fewer permissions than benign apps. Second, leveraging
these distinguishing features, we show that FRAppE can detect malicious apps
with 99.5% accuracy, with no false positives and a high true positive rate
(95.9%). Finally, we explore the ecosystem of malicious Facebook apps and
identify mechanisms that these apps use to propagate. Interestingly, we find
that many apps collude and support each other; in our dataset, we find 1584
apps enabling the viral propagation of 3723 other apps through their posts. Long
term, we see FRAppE as a step toward creating an independent watchdog for
app assessment and ranking, so as to warn Facebook users before installing
apps.
NS16NXT19 TITLE: Carrier Aggregation Between Operators in Next Generation Cellular
Networks: A Stable Roommate Market
NS16NXT25 TITLE: Privacy and security problems of national health data warehouse: a
convenient solution for developing countries
SOFTWARE ENGINEERING
SE16NXT01 TITLE: A Tool-Supported Methodology for Validation and Refinement of Early-
Stage Domain Models
ABSTRACT: In experiments with crossover design subjects apply more than one
treatment. Crossover designs are widespread in software engineering
experimentation: they require fewer subjects and control the variability among
subjects. However, some researchers disapprove of crossover designs. The main
criticisms are: the carryover threat and its troublesome analysis. Carryover is the
persistence of the effect of one treatment when another treatment is applied
later. It may invalidate the results of an experiment. Additionally, crossover
designs are often not properly designed and/or analysed, limiting the validity of
the results. In this paper, we aim to make SE researchers aware of the perils of
crossover experiments and provide risk avoidance good practices. We study how
another discipline (medicine) runs crossover experiments. We review the SE
literature and discuss which good practices tend not to be adhered to, giving
advice on how they should be applied in SE experiments. We illustrate the
concepts discussed by analysing a crossover experiment that we have run. We
conclude that crossover experiments can yield valid results, provided they are
properly designed and analysed, and that, if correctly addressed, carryover is no
worse than other validity threats.
SE16NXT17 TITLE: GoPrime: A Fully Decentralized Middleware for Utility-Aware Service
Assembly
SE16NXT21 TITLE: Exploring topic models in software engineering data analysis: A survey
SE16NXT22 TITLE: Automatic support for formal specification construction using pattern
knowledge
ABSTRACT: Summary form only given. Service science research has developed
rapidly in this decade and has contributed much to enhancing the value of
information systems. Most IT engineers and scientists pursue research on the
premise that information systems make society more effective, optimal, and
comfortable. Since e-services are provided to users, we need to define their
value in terms of user-friendliness, sustainability, economic efficiency,
effectiveness, satisfaction, and other viewpoints. In this talk, I introduce the
service fit model to show how users' utility is enhanced by e-services. In the
first half, we discuss the irrational aspects of human activity and thinking:
even when an information system provides a splendid service, people sometimes
do not consider the service valuable because the service does not match their
way of thinking. In the second half, I introduce examples and actual cases in
which systems provide smart services and functions. IT solutions in the tourism
and semiconductor industries are presented at the end of this talk.
ABSTRACT: The unyielding trend of increasing cyber threats has made cyber
security paramount in protecting personal and private intellectual property. To
provide a highly secured network environment, network traffic monitoring and
threat detection systems must handle real-time data from varied and branching
places in enterprise networks. Although numerous investigations have yielded
real-time threat detection systems, the issue of handling the large volumes of
network traffic data of enterprise systems while simultaneously providing
real-time monitoring and detection remains unsolved, and this paper addresses
it. In particular, we introduce and evaluate a streaming-based threat detection
system that can rapidly analyze highly intensive network traffic data in real
time, utilizing streaming-based clustering algorithms to detect abnormal
network activities. The developed system integrates the streaming and
high-performance data analysis capabilities of Flume, Spark, and Hadoop into a
cloud-computing environment to provide network monitoring and intrusion
detection. Our performance evaluation and experimental results demonstrate that
the developed system can cope with a significant volume of streaming data with
high detection accuracy and good system performance.
PDS16NXT01 TITLE: Cost-Effective Request Scheduling for Greening Cloud Data Centers
ABSTRACT: This paper applies a hybrid parallel model that combines both
shared- and distributed-memory architectures to improve the performance of the
Smith-Waterman (SW) algorithm. The hybrid model uses both MPI and OpenMP
as programming techniques for the different memory architectures. Our improved
implementation executes a parallel version of the SW algorithm with a row-wise
computation of the alignment matrix, which mainly optimizes memory usage. We
applied the parallel SW implementation and tested the system's scalability on a
homogeneous cluster of up to eight nodes, each with twenty-four cores. We used
the SWISS-PROT protein knowledgebase to test our implementation, which achieved
a tremendous reduction in running time using the hybrid MPI-OpenMP version over
the OpenMP and sequential implementations. The hybrid MPI-OpenMP version
achieved speedups of 14X and 50X over the OpenMP and sequential
implementations, respectively, when tested against all the SWISS-PROT protein
knowledgebase entries.
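For reference, the row-wise Smith-Waterman recurrence this abstract parallelizes can be sketched serially in Python; each DP row depends only on the previous one, which is what makes the row-wise decomposition memory-friendly. This toy version is illustrative, not the authors' MPI-OpenMP implementation, and the scoring parameters are arbitrary:

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-2):
    """Row-wise Smith-Waterman local alignment score for sequences a, b."""
    prev = [0] * (len(b) + 1)   # previous DP row; only one row is kept
    best = 0
    for i in range(1, len(a) + 1):
        cur = [0] * (len(b) + 1)
        for j in range(1, len(b) + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            cur[j] = max(0,
                         prev[j - 1] + s,   # diagonal: (mis)match
                         prev[j] + gap,     # up: gap in b
                         cur[j - 1] + gap)  # left: gap in a
            best = max(best, cur[j])
        prev = cur
    return best
```

Because only `prev` and `cur` are live at any time, memory grows with the shorter sequence rather than with the full alignment matrix.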
ABSTRACT: For utility-side consumers in remote areas who are not linked to the
central electrical grid network, hybrid systems such as
Photovoltaic/Microturbine have been considered the most reliable, desirable,
and attractive unconventional source of power supply. This paper documents the
analysis and simulation of a Photovoltaic/Microturbine hybrid system using
MATLAB/SIMULINK with the SimPowerSystems block set as its simulation software.
The system is designed entirely on the concept of a parallel hybrid
configuration. The software modeling of a Photovoltaic/Microturbine system
provides an in-depth understanding of the system's operation before building
the actual system, since testing and experimenting with system operation under
disturbances is not possible on the actual system.
PDS16NXT06 TITLE: Transient voltage and frequency stability of an isolated microgrid based
on energy storage systems
ABSTRACT: Sharing the load between distributed generation plants in an
energy-storage-based microgrid (MG) is a principal issue in MG operation.
Stability is an important component of MG energy management and planning,
especially during system operation under events such as faults. In this
article, frequency and voltage control methods based on active and reactive
power control are analyzed. The studies demonstrate that stable MG operation is
attainable when proper control strategies are used. MG system control for
stability improvement in islanded (autonomous) mode after a fault at the
upstream network is studied. The MG system in this article includes two
distributed generation (DG) units; all DG units and loads are connected in
parallel at the point of common coupling (PCC). In islanded mode, because the
system's dynamics depend strongly on local load changes, a controller must be
designed to improve stability after a fault occurs. For frequency-load control,
we show that one DG unit acts as master and the other as slave. The proposed
controller, based on the energy storage system, is designed to account for load
uncertainty. In the final section of the article, to demonstrate the
improvement and superior robustness of the proposed controller against load
dynamics and faults, and its capability to cover over-demand and short-term
drops in produced power through frequency and voltage control in cooperation
with the energy storage system (one of the novelties of this article), the
proposed controller is compared with a classic controller. The proposed control
strategy performs well under two scenarios (load change and fault occurrence).
Finally, the superior robustness of the proposed controller is evaluated using
MATLAB.
PDS16NXT07 TITLE: Improved parallel operation mode control of parallel connected 12-
pulse thyristor dual converter for urban railway power substations
PDS16NXT09 TITLE: A Parallel Random Forest Algorithm for Big Data in a Spark Cloud
Computing Environment
ABSTRACT: With the emergence of the big data age, the issue of how to obtain
valuable knowledge from a dataset efficiently and accurately has attracted
increasing attention from both academia and industry. This paper presents a
Parallel Random Forest (PRF) algorithm for big data on the Apache Spark
platform. The PRF algorithm is optimized based on a hybrid approach combining
data-parallel and task-parallel optimization. From the perspective of data-
parallel optimization, a vertical data-partitioning method is performed to reduce
the data communication cost effectively, and a data-multiplexing method is
performed to allow the training dataset to be reused and diminish
the volume of data. From the perspective of task-parallel optimization, a dual
parallel approach is carried out in the training process of RF, and a task Directed
Acyclic Graph (DAG) is created according to the parallel training process of PRF
and the dependence of the Resilient Distributed Datasets (RDD) objects. Then,
different task schedulers are invoked for the tasks in the DAG. Moreover, to
improve the algorithm’s accuracy for large, high-dimensional, and noisy data, we
perform a dimension-reduction approach in the training process and a weighted
voting approach in the prediction process prior to parallelization. Extensive
experimental results indicate the superiority and notable advantages of the PRF
algorithm over the relevant algorithms implemented by Spark MLlib and other
studies in terms of the classification accuracy, performance, and scalability.
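The weighted-voting prediction step mentioned in this abstract can be sketched as follows. Weighting each tree's vote by a per-tree accuracy estimate is an assumption for illustration, since the abstract does not specify the source of the weights:

```python
from collections import defaultdict

def weighted_vote(predictions, weights):
    """Combine per-tree class predictions by weighted voting.

    predictions: list of class labels, one per tree.
    weights: per-tree weights (e.g. each tree's validation accuracy;
    the weighting source here is an assumption, not taken from the paper).
    """
    score = defaultdict(float)
    for label, w in zip(predictions, weights):
        score[label] += w
    return max(score, key=score.get)

# Three trees predict "spam" and two predict "ham", but the "ham" trees
# carry higher weights, so the weighted vote can flip a plain majority.
```

With equal weights this reduces to ordinary majority voting, so the scheme only changes predictions where tree quality differs.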
PDS16NXT10 TITLE: Data-driven fault detection for networked control system based on
implicit model approach
PDS16NXT13 TITLE: Improved Holistic Analysis for Fork-Join Distributed Real-Time Tasks
supported by the FTT-SE Protocol
PDS16NXT15 TITLE: Distributed and parallel real-time control system equipped FPGA-Zynq
and EPICS middleware
ABSTRACT: Zynq series of Xilinx FPGA chips are divided into Processing System
(PS) and Programmable Logic (PL), as a kind of SoC (System on Chip). PS with the
dual-core ARM CortexA9 processor is performing the high-level control logic at
run-time on Linux operating system. PL with the low-level Field Programmable
Gate Array (FPGA) built on high-performance, low-power, and high-k metal gate
process technology is connecting with a lot of I/O peripherals for real-time
control system. EPICS (Experimental Physics and Industrial Control System) is a
set of open-source-based software tools which supports for the Ethernet-based
middleware layer. In order to configure the environment of the distributed
control system, EPICS middleware is equipped on the Linux operating system of
the Zynq PS. In addition, a lot of digital logic gates of the Zynq PL of FPGA-Zynq
evaluation board (ZedBoard) are connected with I/O pins of the daughter board
via FPGA Mezzanine Connector (FMC) of ZedBoard. An interface between the
Zynq PS and PL is interconnected with AMBA4 AXI. For the organic connection
both the PS and PL, it also used the Linux device driver for AXI interface. This
paper describes the content and configuration of the distributed and parallel
real-time control system applying FPGA-Zynq and EPICS middleware.
PDS16NXT17 TITLE: Resilient distributed computing platforms for big data analysis using
Spark and Hadoop
PDS16NXT24 TITLE: A parallel primal-dual interior-point method for DC optimal power flow
ABSTRACT: Optimal power flow (OPF) seeks the operation point of a power
system that maximizes a given objective function and satisfies operational and
physical constraints. Given real-time operating conditions and the large scale of
the power system, it is demanding to develop algorithms that allow for OPF to
be decomposed and efficiently solved on parallel computing systems. In our
work, we develop a parallel algorithm for applying the primal-dual interior point
method to solve OPF. The primal-dual interior point method has a much faster
convergence rate than gradient-based algorithms but requires solving a series of
large, sparse linear systems. We design efficient parallelized, iterative
methods to solve such linear systems that exploit the sparsity structure of the
system matrix.
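As an illustration of the iterative, sparsity-exploiting linear solves this abstract describes, a conjugate-gradient sketch is shown below. The paper's actual solver is not specified here; CG is one representative iterative method for symmetric positive-definite systems, and it needs only a matrix-vector product, so a sparse matrix never has to be stored densely:

```python
def conjugate_gradient(matvec, b, tol=1e-10, max_iter=1000):
    """Solve A x = b for symmetric positive-definite A via CG.

    matvec: function computing A @ v; only this product is needed,
    which is what lets sparse OPF-style systems stay sparse.
    """
    n = len(b)
    x = [0.0] * n
    r = list(b)                 # residual b - A x (x starts at zero)
    p = list(r)                 # search direction
    rs = sum(v * v for v in r)
    for _ in range(max_iter):
        if rs < tol * tol:
            break
        ap = matvec(p)
        alpha = rs / sum(pi * api for pi, api in zip(p, ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, ap)]
        rs_new = sum(v * v for v in r)
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x
```

In exact arithmetic CG converges in at most n iterations, and the dominant cost per iteration is the matvec, which parallelizes naturally over the matrix's sparsity structure.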
PDS16NXT25 TITLE: Parallel matrix neural network training on cluster systems for dynamic
FET modeling from large datasets
ABSTRACT: This paper presents a powerful and general parallel artificial neural
network training technique with parallel computing on cluster systems. Large
numbers of training samples are distributed to multiple computers on a cluster
system to achieve a high speed-up for training. The method is evaluated with
respect to two examples of an advanced dynamic nonlinear simulation model for
GaN transistors where the training set is measured large-signal waveform data
from an NVNA. For advanced models in a high dimensional space with large
training sample size, the proposed approach is demonstrated to reduce the total
training time by a factor of 35.
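The sample-distribution idea in this abstract can be sketched with a data-parallel gradient-averaging step for a toy one-parameter linear model. This is illustrative only; the paper trains matrix neural networks on measured large-signal waveform data, not this toy model:

```python
def local_gradient(w, samples):
    """Mean-squared-error gradient for a 1-D linear model y = w*x
    over one worker's shard of the training samples."""
    g = 0.0
    for x, y in samples:
        g += 2.0 * (w * x - y) * x
    return g / len(samples)

def parallel_step(w, shards, lr=0.05):
    """One data-parallel training step: each node computes a gradient
    on its own shard, then the gradients are averaged, which matches a
    full-batch step when the shards are equally sized."""
    grads = [local_gradient(w, s) for s in shards]   # one per node
    return w - lr * sum(grads) / len(grads)          # aggregate + update

# Toy data drawn from y = 3x, split across two "nodes".
shards = [[(1.0, 3.0), (2.0, 6.0)], [(3.0, 9.0), (4.0, 12.0)]]
w = 0.0
for _ in range(200):
    w = parallel_step(w, shards)
```

The speed-up comes from the per-shard gradient computations running concurrently, with only the small aggregated gradient exchanged between nodes each step.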
TRAINING AND SERVICES