

1. Compressed Hierarchical Mining of Frequent Closed Patterns from Dense Data Sets
2. Game Theoretic Stochastic Routing for Fault Tolerance and Security in Computer Networks
3. Analysis and Optimization of Service Availability in an HA Cluster with Load-Dependent Machine Availability
4. Network-Supported Layered Multicast Transport Control for Streaming Media
5. Optimizing Dual-Core Execution for Power Efficiency and Transient-Fault Recovery
6. Community Mining from Signed Social Networks
7. Clustering over Multiple Evolving Streams by Events and Correlations
8. Mining of Reuse Patterns in CDMA Network Optimization
9. Traffic Engineering an Operational Network with the TOTEM Toolbox
10. Path-Computation-Element-Based Architecture for Interdomain MPLS/GMPLS Traffic Engineering: Overview and Performance
11. A Probabilistic Approach for Managing Mobile Ad-Hoc Networks

Compressed Hierarchical Mining of Frequent Closed Patterns from Dense Data Sets

Abstract—This paper addresses the problem of finding frequent closed patterns (FCPs) from very dense data sets. We introduce
two compressed hierarchical FCP mining algorithms: C-Miner and B-Miner. The two algorithms compress the original mining
space, hierarchically partition the whole mining task into independent subtasks, and mine each subtask progressively. The two
algorithms adopt different task partitioning strategies: C-Miner partitions the mining task based on Compact Matrix Division,
whereas B-Miner partitions the task based on Base Rows Projection. The compressed hierarchical mining algorithms enhance the
mining efficiency and facilitate a progressive refinement of results. Moreover, because the subtasks can be mined independently,
C-Miner and B-Miner can be readily paralleled without incurring significant communication overhead. We have implemented C-
Miner and B-Miner, and our performance study on synthetic data sets and real dense microarray data sets shows their
effectiveness over existing schemes. We also report experimental results on parallel versions of these two methods.
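To make the "frequent closed pattern" notion concrete, here is a minimal brute-force sketch: a frequent pattern is closed if no proper superset has the same support. The function name and structure are illustrative only; this baseline deliberately ignores the compression and hierarchical partitioning that make C-Miner and B-Miner scale to dense data.

```python
from itertools import combinations

def frequent_closed_patterns(transactions, min_support):
    """Brute-force baseline: enumerate all frequent itemsets, then keep
    those that are closed (no proper superset with equal support)."""
    items = sorted({i for t in transactions for i in t})
    support = {}
    for r in range(1, len(items) + 1):
        for pattern in combinations(items, r):
            count = sum(1 for t in transactions if set(pattern) <= set(t))
            if count >= min_support:
                support[frozenset(pattern)] = count
    # A pattern survives only if no strict superset has identical support.
    return {tuple(sorted(p)): s
            for p, s in support.items()
            if not any(p < q and support[q] == s for q in support)}
```

On three transactions {a,b,c}, {a,b}, {a,c} with minimum support 2, the closed patterns are {a} (support 3), {a,b} and {a,c} (support 2); {b} and {c} are frequent but not closed because their supersets have the same support.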

Game Theoretic Stochastic Routing for Fault Tolerance and Security in Computer Networks

Abstract—We introduce the Game-Theoretic Stochastic Routing (GTSR) framework, a proactive alternative to today’s reactive
approaches to route repair. GTSR minimizes the impact of link and router failure by 1) computing multiple paths between source
and destination and 2) selecting among these paths randomly to forward packets. Besides improving fault tolerance, the fact
that GTSR makes packets take random paths from source to destination also improves security. In particular, it makes
connection eavesdropping attacks maximally difficult as the attacker would have to listen on all possible routes. The approaches
developed are suitable for network layer routing, as well as for application layer overlay routing and multipath transport
protocols such as the Stream Control Transmission Protocol (SCTP). Through simulations, we validate our theoretical results and
show how the resulting routing algorithms perform in terms of the security/fault-tolerance/delay/throughput trade-off. We also
show that a beneficial side effect of these algorithms is an increase in throughput, as they make use of multiple paths.
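The core GTSR mechanism of randomizing over precomputed paths can be sketched as follows. The path weights here are placeholders: in GTSR they come from solving a game against an adversary/failure process. `interception_probability` is a hypothetical helper, included only to illustrate why an eavesdropper on a single link sees just a fraction of the traffic.

```python
import random

def stochastic_forward(paths, weights, rng=None):
    """Per-packet random path choice. In GTSR the weights come from the
    equilibrium of a router-vs-adversary game (placeholders here)."""
    rng = rng or random.Random(0)
    return rng.choices(paths, weights=weights, k=1)[0]

def interception_probability(paths, probs, tapped_link):
    """Fraction of traffic an eavesdropper on one directed link sees:
    the total probability mass of paths traversing that link."""
    return sum(p for path, p in zip(paths, probs)
               if tapped_link in list(zip(path, path[1:])))
```

With two equally likely disjoint paths s-a-d and s-b-d, tapping any single intermediate link intercepts only half the packets, which is the qualitative security benefit the abstract describes.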

Analysis and Optimization of Service Availability in an HA Cluster with Load-Dependent Machine Availability

Abstract—Calculations of the service availability of a High-Availability (HA) cluster are usually based on the assumption of load-independent machine availabilities. In this paper, we study the issues and show how service availabilities can be calculated
under the assumption that machine availabilities are load dependent. We present a Markov chain analysis to derive the steady-state service availabilities of an HA cluster with load-dependent machine availability. We show that with load-dependent machine availability, the attained service availability becomes policy dependent. After formulating the problem as a Markov Decision Process, we proceed to determine the optimal policy achieving the maximum service availabilities by using the method of policy iteration.
Two greedy assignment algorithms are studied: least load and first derivative length (FDL) based, where least load corresponds
to some load balancing algorithms. We carry out the analysis and simulations on two cases of load profiles: In the first profile, a
single machine has the capacity to host all services in the HA cluster; in the second profile, a single machine does not have
enough capacity to host all services. We show that the service availabilities achieved under the first load profile are the same,
whereas the service availabilities achieved under the second load profile are different. Since the service availabilities achieved
are different in the second load profile, we proceed to investigate how the distribution of service availabilities across the services
can be controlled by adjusting the rewards vector.
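A minimal sketch of two ingredients the abstract combines: a two-state (up/down) machine whose failure rate grows with load, and the least-load greedy assignment. The linear failure-rate law is our own assumption for illustration; the paper's Markov chain and policy-iteration analysis is far richer than this.

```python
def machine_availability(load, base_fail_rate, load_factor, repair_rate):
    """Steady-state availability of a two-state machine, mu/(lambda+mu),
    with a failure rate that increases linearly in load (an assumed
    model, not the paper's)."""
    fail_rate = base_fail_rate * (1 + load_factor * load)
    return repair_rate / (fail_rate + repair_rate)

def least_load_assign(num_machines, service_loads):
    """Greedy least-load policy: place each service, largest first,
    on the currently least-loaded machine."""
    loads = [0.0] * num_machines
    assignment = {}
    for svc in sorted(range(len(service_loads)),
                      key=lambda i: -service_loads[i]):
        m = min(range(num_machines), key=loads.__getitem__)
        loads[m] += service_loads[svc]
        assignment[svc] = m
    return assignment, loads
```

Under this toy model an idle machine with failure rate 0.1 and repair rate 0.9 is available 90 percent of the time, and availability drops as load is added, which is exactly the coupling between placement policy and service availability that the paper analyzes.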

Network-Supported Layered Multicast Transport Control for Streaming Media

Abstract—Multicast is very efficient in distributing a large volume of data to multiple receivers over the Internet. Layered
multicast helps solve the heterogeneity problem in multicast delivery. Extensive work has been done in the area of layered
multicast, for both congestion control and error control. In this paper, we focus on network-supported protocols for streaming
media. Most of the existing work solves the congestion control and error control problems separately and does not give an
integrated efficient solution. In this paper, after reviewing related work, we introduce our proposed protocols, namely, Router-
Assisted Layered Multicast (RALM) and Router-Assisted Layered FEC (RALF). The former is a congestion control protocol, whereas
the latter is an error control protocol. They work under the same framework and provide an integrated solution. We also extend
RALM to RALM-II, which is compatible with Transmission Control Protocol (TCP) traffic. We analyze the complexity of the
proposed protocols in the network and investigate their performance through simulations. We show that our solution achieves
significant performance gains with reasonable additional complexity.

Optimizing Dual-Core Execution for Power Efficiency and Transient-Fault Recovery

Abstract—Dual-core execution (DCE) is an execution paradigm proposed to utilize chip multiprocessors to improve the
performance of single-threaded applications. Previous research has shown that DCE provides a complexity-effective approach to
building a highly scalable instruction window and achieves significant latency-hiding capabilities. In this paper, we propose to
optimize DCE for power efficiency and/or transient-fault recovery. In DCE, a program is first processed (speculatively) in the
front processor and then reexecuted by the back processor. Such reexecution is the key to eliminating the centralized structures
that are normally associated with very large instruction windows. In this paper, we exploit the computational redundancy in DCE
to improve its reliability and its power efficiency. The main contributions include: 1) DCE-based redundancy checking for
transient-fault tolerance and a complexity-effective approach to achieving full redundancy coverage and 2) novel techniques to
improve the power/energy efficiency of DCE-based execution paradigms. Our experimental results demonstrate that, with the


proposed simple techniques, the optimized DCE can effectively achieve transient-fault tolerance or significant performance
enhancement in a power/energy-efficient way. Compared to the original DCE, the optimized DCE has similar speedups (34
percent on average) over single-core processors while reducing the energy overhead from 93 percent to 31 percent.

Community Mining from Signed Social Networks

Abstract—Many complex systems in the real world can be modeled as signed social networks that contain both positive and
negative relations. Algorithms for mining social networks have been developed in the past; however, most of them were
designed primarily for networks containing only positive relations and, thus, are not suitable for signed networks. In this work,
we propose a new algorithm, called FEC, to mine signed social networks where both positive within-group relations and negative
between-group relations are dense. FEC considers both the sign and the density of relations as the clustering attributes, making
it effective for not only signed networks but also conventional social networks including only positive relations. Also, FEC adopts
an agent-based heuristic that makes the algorithm efficient (in linear time with respect to the size of a network) and capable of
giving nearly optimal solutions. FEC depends on only one parameter whose value can easily be set and requires no prior
knowledge of hidden community structures. The effectiveness and efficiency of FEC have been demonstrated through a set of
rigorous experiments involving both benchmark and randomly generated signed networks.
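As a naive baseline for the signed-network setting, the sketch below groups nodes joined by positive edges and lets negative edges fall between groups. This is only a sign-based union-find, not FEC: FEC additionally uses edge density via an agent-based random walk and runs in time linear in the network size.

```python
def sign_communities(nodes, signed_edges):
    """Sign-only baseline: merge endpoints of positive edges with
    union-find; negative edges end up between the resulting groups."""
    parent = {n: n for n in nodes}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for u, v, sign in signed_edges:
        if sign > 0:
            parent[find(u)] = find(v)

    groups = {}
    for n in nodes:
        groups.setdefault(find(n), set()).add(n)
    return {frozenset(g) for g in groups.values()}
```

On four nodes with positive edges a-b and c-d and a negative edge b-c, the baseline recovers the two communities {a, b} and {c, d}; FEC handles the harder cases where positive and negative edges are noisy.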

Clustering over Multiple Evolving Streams by Events and Correlations

Abstract—In applications involving multiple data streams, such as stock market trading and sensor network data analysis, the clusters of streams change at different times as the data evolve. Information about evolving clusters is valuable for supporting online decisions. In this paper, we present a framework for Clustering Over Multiple Evolving sTreams by CORrelations and Events, abbreviated as COMET-CORE, which monitors the distribution of clusters over multiple data streams based on their correlations. Instead of directly reclustering the multiple data streams periodically, COMET-CORE applies efficient cluster split and merge processes only when significant cluster evolution happens. Accordingly, we devise an event detection mechanism to signal the cluster adjustments. The incoming streams are smoothed into sequences of end points by piecewise linear approximation. Whenever an end point is generated, the weighted correlations between streams are updated. End points are good indicators of significant change in a stream, and such changes are a main cause of cluster evolution events. When an event occurs, we can report the latest clustering results through split and merge operations. As shown in our experimental studies, COMET-CORE performs efficiently while delivering good clustering quality.
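The end-point detection step can be sketched with a greedy piecewise linear approximation: start a new segment whenever the line from the last end point to the current sample can no longer represent the intermediate samples within a tolerance. The greedy scheme below is one common variant, not necessarily the exact one COMET-CORE uses.

```python
def end_points(values, max_error):
    """Greedy piecewise linear approximation: record index i-1 as an end
    point when the segment from the last end point to sample i fails to
    cover some intermediate sample within max_error."""
    if len(values) < 3:
        return list(range(len(values)))
    pts, anchor = [0], 0
    for i in range(2, len(values)):
        for j in range(anchor + 1, i):
            # Linear interpolation between the anchor and sample i.
            interp = values[anchor] + (values[i] - values[anchor]) * \
                     (j - anchor) / (i - anchor)
            if abs(interp - values[j]) > max_error:
                pts.append(i - 1)
                anchor = i - 1
                break
    pts.append(len(values) - 1)
    return pts
```

For the stream 0, 1, 2, 3, 10, 11, 12 with tolerance 0.5, the detector places end points at the level shift (indices 3 and 4), which is precisely the kind of significant change that would trigger a cluster-evolution event.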

Mining of Reuse Patterns in CDMA Network Optimization

Abstract—The Channel Assignment Problem (CAP), a classical problem in cellular mobile communication networks such as GSM/GPRS or CDMA networks, refers to assigning a limited number of available channels to many mobile voice or data users while minimizing electromagnetic interference. Most existing research on CAP focuses mainly on channel separation constraints but pays little attention to the geometric constraints defined by reuse patterns, which are of great value in engineering applications. An extended CAP model named RCAP (Reuse-pattern-oriented CAP) is proposed, which takes both kinds of constraints into account. A systematic solution to RCAP is elaborated, consisting of two sequential steps: partition of reuse groups and channel assignment within the groups. Partition of reuse groups is reduced to subgraph isomorphism and can be tackled by subgraph mining, while channel assignment within groups is solved heuristically. The effectiveness of RCAP and its solution was verified on three benchmarks. RCAP was applied to PN offset assignment in CDMA networks in several cities of Liaoning Province, and a case study is illustrated in the paper. These results indicate that the proposed RCAP scheme is a very promising solution to channel assignment problems in practical CDMA networks.
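To illustrate the channel-separation constraint that classical CAP formulations center on, here is a minimal greedy assignment sketch. It captures only channel separation; RCAP's contribution is layering the reuse-pattern (geometric) constraints on top, which this toy code does not attempt.

```python
def greedy_channel_assignment(cells, interference, separation):
    """Give each cell the lowest channel that keeps at least
    `separation` spacing from every already-assigned interfering
    neighbour (channel-separation constraint only)."""
    channels = {}
    for v in cells:
        c = 0
        while any(u in channels and abs(c - channels[u]) < separation
                  for u in interference.get(v, ())):
            c += 1
        channels[v] = c
    return channels
```

For a three-cell chain a-b-c with separation 2, the greedy pass assigns channels 0, 2, 0: cells a and c may reuse a channel because they do not interfere, which is the reuse idea the paper formalizes with patterns.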

Traffic Engineering an Operational Network with the TOTEM Toolbox

Abstract—We explain how the TOTEM toolbox can be used to engineer an operational network. TOTEM is an open source TOolbox
for Traffic Engineering Methods that covers IP-based and MPLS-based intradomain traffic engineering (TE) algorithms as well as
interdomain TE. In this paper, we use the toolbox as an off-line simulator to optimise the traffic of an operational network. To
help an operator choose between an IP-based or MPLS-based solution, or find the best way to load-balance a network for a given
traffic, our case study compares several IP and MPLS routing algorithms, evaluates the impact of hot-potato routing on the
intradomain traffic matrix, and analyses the worst-case link failure. This study reveals the power of a toolbox that federates
many traffic engineering algorithms.

Path-Computation-Element-Based Architecture for Interdomain MPLS/GMPLS Traffic Engineering: Overview and Performance

Abstract—The Path Computation Element Working Group at the Internet Engineering Task Force is chartered to specify a PCE-
based architecture for the path computation of interdomain MPLS- and GMPLS-based traffic engineered label switched paths. In
this architecture, path computation does not occur at the head-end LSR, but on another path computation entity that may not be
physically located on the head-end LSR. This method differs greatly from the traditional “per-domain” approach to path
computation. This article presents analysis and results that compare performance of the PCE architecture with the current state-
of-the-art approach. Detailed simulations are undertaken on varied and realistic scenarios where preliminary results show several
performance benefits from the deployment of PCE. To provide a complete overview of the significant developments taking place in this
area, milestones and progress at the IETF PCE WG are also discussed.

A Probabilistic Approach for Managing Mobile Ad-Hoc Networks

Abstract—A pure management approach where all the nodes are managed at any time is too strict for mobile ad-hoc networks.
Instead of addressing the management of the whole network, we propose a probabilistic scheme where only a subset of nodes is
managed to provide lightweight and efficient management. These nodes are determined based on their network behavior so as to favor subsets of well-connected and actively participating nodes. With respect to such a selective management scheme, we derive probabilistic guarantees on the percentage of nodes to be managed. Our contribution is centered on a distributed self-organizing management algorithm at the application layer, on its efficient deployment into a management architecture, and on a comprehensive simulation study. We show how to organize the management plane by extracting spatio-temporal components and by selecting manager nodes with several election mechanisms based on degree centrality, eigenvector centrality, and the K-means paradigm.
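The simplest of the election mechanisms mentioned, degree centrality, can be sketched as follows: manage only the top fraction of nodes by degree. The function and its `fraction` parameter are illustrative stand-ins; the paper's scheme also considers eigenvector centrality and K-means over node behavior.

```python
def elect_managers(adjacency, fraction):
    """Degree-centrality election: return the top `fraction` of nodes
    ranked by degree, so only well-connected nodes are managed."""
    ranked = sorted(adjacency, key=lambda n: len(adjacency[n]),
                    reverse=True)
    k = max(1, round(fraction * len(ranked)))
    return set(ranked[:k])
```

In a star network the hub is elected first, matching the intuition that managing a small set of well-connected nodes covers most of the network's activity.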

Gathered By Prasanna Kumar Palepu 0091-9884245698