This article has been accepted for publication in a future issue of this journal, but has not been fully edited. Content may change prior to final publication. Citation information: DOI 10.1109/TCC.2016.2594172, IEEE Transactions on Cloud Computing

IEEE TRANSACTIONS ON CLOUD COMPUTING, VOL. XX, NO. XX, MONTH YEAR 1

Cost-Aware Multimedia Data Allocation for
Heterogeneous Memory Using Genetic
Algorithm in Cloud Computing
Keke Gai, Student Member, IEEE, Meikang Qiu, Member, IEEE, Hui Zhao, Student Member, IEEE

Abstract—Recent expansions of Internet-of-Things (IoT) applications built on cloud computing have been growing at a phenomenal rate.
As part of this development, heterogeneous cloud computing has enabled a variety of cloud-based infrastructure solutions, such
as multimedia big data. Numerous prior studies have explored optimizations of on-premise heterogeneous memories.
However, heterogeneous cloud memories face constraints due to the performance limitations and cost concerns caused
by hardware distribution and manipulation mechanisms. Assigning data tasks to distributed memories with various capacities
is a combinatorial NP-hard problem. This paper focuses on this issue and proposes a novel approach, the Cost-Aware Heterogeneous
Cloud Memory Model (CAHCM), which aims to provision high-performance cloud-based heterogeneous memory service offerings.
The main algorithm supporting CAHCM is the Dynamic Data Allocation Advance (2DA) Algorithm, which uses genetic programming
to determine the data allocations on cloud-based memories. In our proposed approach, we consider a set of crucial factors
impacting the performance of cloud memories, such as communication costs, data move operating costs, energy performance,
and time constraints. Finally, we implement experimental evaluations to examine our proposed model. The experimental results
show that our approach is adoptable and feasible as a cost-aware cloud-based solution.

Index Terms—Cloud computing, genetic algorithm, heterogeneous memory, data allocation, multimedia big data


1 INTRODUCTION

The advance of cloud computing has motivated a variety of explorations in information retrieval for big data informatics in recent years. Heterogeneous clouds are considered a fundamental solution for performance optimization across different operating environments when data processing becomes a challenge in multimedia big data [1], [2]. The growing demands of multimedia big data have driven an increasing number of cloud-based applications in the Internet of Things (IoT). Combining heterogeneous embedded systems with cloud-oriented services can enable various advantages in multimedia big data. Currently, cloud-based memories are mostly deployed in a non-distributive manner on the cloud side [3]. This deployment causes a number of limitations, such as energy overload, additional communications, and lower-performance resource allocation mechanisms, which restrict the implementation of cloud-based heterogeneous memories [4]–[6]. This paper concentrates on this issue and proposes an innovative data allocation approach for minimizing the total costs of distributed heterogeneous memories in cloud systems.

Contemporary cloud infrastructure deployments mainly migrate data processing and storage to the clouds [7]. Central Processing Units (CPUs) and memories offering processing services are hosted by individual cloud vendors. This type of deployment can meet the processing and analysis demands of smaller-sized data [8]–[10]. However, discontinuous or periodically changing implementations of big-data-oriented usage have caused bottlenecks for sustaining firm performance [11]. For example, some data processing tasks are tightly associated with industrial trends or operations, such as annual accounting and auditing [12], [13]. Therefore, a flexible approach accommodating dynamic usage switches has become an urgent requirement in high-performance multimedia big data.

Moreover, another challenge is that deploying distributed memories in clouds still faces a few restrictions [14], [15]. Allocating data to multiple cloud-based memories encounters obstacles due to diverse impact factors and parameters [16]. Divergent configurations and varied capabilities can limit the entire system's performance, since a naive task separation is insufficient to fully utilize heterogeneous memories. This implies that minimizing the entire cost of using cloud memories is restricted by multi-dimensional constraint conditions.

To address the main challenge, we propose a novel approach entitled the Cost-Aware Heterogeneous Cloud Memory (CAHCM) Model, which aims to achieve a reduced data processing time

• K. Gai is with the Department of Computer Science, Pace University, New York, NY 10038, USA, kg71231w@pace.edu.
• M. Qiu (corresponding author) is with the Department of Computer Science, Pace University, New York, NY 10038, USA, mqiu@pace.edu.
• H. Zhao is with the Software School, Henan University, Kaifeng, Henan, 475000, China, zhh@henu.edu.cn.
• This work is supported by NSF CNS-1457506 and NSF CNS-1359557.

Manuscript received October XX, 2015

2168-7161 (c) 2016 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.

and gaining the minimum computing costs. For supporting the proposed model, we propose an algorithm, the Dynamic Data Allocation Advance (2DA) Algorithm, which is designed to obtain the minimum total costs using a genetic algorithm. The operating principle of the proposed model is using the genetic algorithm to minimize the processing costs via mapping data processing capabilities for different datasets, which is also an approach for generating an optimized data processing mechanism in cloud-based multimedia data processing. The task assignments are completed by a data allocation operation in which tasks are assigned to cloud memories based on the mapping of the memory capabilities to the inputs. The costs can be any expenditures during the operations, such as communication costs, data move operating costs, energy performance, and time constraints. The outcomes are optimal solutions under certain constraints and suboptimal solutions under non-constraint conditions. Implementing 2DA enables a dynamic data allocation to heterogeneous cloud memories with varied capacities. The operating flow is based on the distributed parallel computations in heterogeneous memories located in various cloud vendors; using our proposed scheme can facilitate a high performance of such computations. Fig. 1 represents the architecture of the cloud-based heterogeneous memory in which our proposed model will be applied.

The main contributions of our research mainly include three aspects:

1) We venture to solve the cost optimization problem on heterogeneous memories in a polynomial time, which is a NP-hard problem. We consider a group of crucial factors that can impact the costs when using cloud-based distributed heterogeneous memories, such as energy, time, and communication sources [17].
2) The proposed model is an attempt in using heterogeneous cloud memories, offered as Memory-as-a-Service (MaaS), to solve multimedia big data problems by dynamically allocating data to various cloud resources.
3) We produce an approach that can be used to increase the entire usage rate of the cloud infrastructure along with the enhancement of the computation capability, using data allocation techniques in big data through heterogeneous cloud memories for efficient management.

The significance of this research is that the findings introduce a new approach for solving big data problems that request high efficiency in generating solutions to data allocations.

The remainder of the paper is organized as follows. Section 2 reviews recent academic works in the relative fields. Next, we express a motivational example concerning the application of the proposed model in a simple implementation scenario in Section 3. Furthermore, the proposed model statements are given in Section 4. In addition, we provide detailed descriptions about the proposed algorithm in Section 5. Moreover, experimental evaluations are stated in Section 6, which configures the experimental settings and exhibits some experimental findings. Finally, we conclude the research in Section 7.

Fig. 1. The architecture of cloud-based heterogeneous memories.

2 RELATED WORK

This section reviews two crucial aspects related to our research, including multimedia big data and resource management for multimedia in cloud computing. Investigations in these two hemispheres are theoretical supports for our research background.

2.1 Multimedia Big Data

Multimedia big data is a new technical term describing big data mechanisms applied in the multimedia field. Most data mining techniques can be directly used in the multimedia big data field [23]. The high-performance-oriented data mining is one research direction in multimedia big data [18]. One research was explored to prove that social images shared on cloud-based multimedia had a high level of similarities when users are socially connected [19]. This research was an attempt of using multimedia to further expand the outcomes of big data mining. Meanwhile, data-oriented scheduling was also explored [20]. We focus on the computation workload dimension, even though a few other concentrations have been addressed by the prior researches.

Moreover, a variety of preprocessing techniques have been examined by recent researches as well. One data preprocessing operation was configured in wireless multimedia sensor networks by deleting those data that have less impact on the results [24]. Computing the data having higher-level weights can obtain approximate solutions [25]. Some other data mining uses time series to determine the weights of the data by predictions [26]. However, this type of solution becomes challenging when the number of the parameters or variables grows, which results in the difficulty of gaining an adaptable solution that can be completed in a polynomial time. The feature selection was a research direction that could be used to eliminate those data with less data mining value. Despite many advantages of feature selections, this approach highly depends on the fault tolerance rate: it cannot ensure the accuracy if the rate is not well configured, even though the computation workload is reduced. A similar research focusing on reducing data transfer workloads used data partition techniques for optimizations. In addition, an approach was proposed to use different constraints for re-ranking the image features [27].

However, most previous research work has addressed the software improvement for multimedia big data rather than the hardware side, which was the main bottleneck for increasing the performance of the multimedia big data. The data processing efficiency was a challenging issue due to the data size feature of the multimedia. Therefore, our research proposes an adaptive method that can ensure a high efficiency of resource management method generation, producing an innovative data allocation method for improving efficiency without reducing workloads.

2.2 Resource Management for Multimedia in Cloud Computing

Resource management in cloud computing is considered an important research topic. Computing resources on cloud computing are deployed in a distributive manner, and the approach of maximizing cloud resources has been explored by recent researches in different perspectives. One of the crucial aspects in cost-aware approaches was saving energy while increasing the entire system's performance [28]. Prior researches have also addressed a few dimensions increasing the system's performance by applying scheduling algorithms on Virtual Machines (VMs) [29], [30]. However, the cost of the implementations will become greater while the amount of VMs increases [34], such that the balance of computation workload and outcome quality is still the critical part in this research [31], [32]. From the perspective of security and forensics, combining various inspection approaches on multiple VMs can increase the threat detection capability, which offers an option of maximizing the security level by applying multiple inspection processes through the Internet [33]. This can be an alternative for enhancing the computation capability by applying resource scheduling algorithms [21].

Furthermore, cloud task scheduling was also a research direction in resource management for big data [22]. This technique was also combined with Web services [23]. Some other scheduling research aimed to gain sub-optimal scheduling solutions by optimizing each iteration; however, the computation speed was not increased. One recent optimization dimension was using iterative ordinal optimizations to adapt dynamic workloads in cloud computing [36]. The optimizations usually address the provisions of the heterogeneous computing from distributed resources [35]. Most prior optimizations focused on reducing the latency and constraints caused by bandwidths and data diversity [37]. However, most prior researches did not consider implementing heterogeneous cloud memories, such that the potential optimizations were ignored. Applying the heterogeneous memory in cloud computing is an approach to increase cloud service efficiency, and its challenge is data allocations to diverse memories; this type of optimization mainly depends on the purpose of resource management.

Therefore, we concentrate on a crucial direction: our research focuses on using cloud-based memories by deploying distributed heterogeneous computing resources. Our work targets at increasing the computation speed by optimizing the method of data allocations. The target addressed by our research is producing a method that can efficiently produce data allocation plans with minimum costs.

3 MOTIVATIONAL EXAMPLE

We give a motivational example for clarifying the operational processes of CAHCM in this section. Assume that there are four cloud vendors offering MaaS with different performances, namely M1, M2, M3, and M4. Correspondingly, cloud memories have different memory service offerings. Our proposed approach also requires a few parameters for mapping cost requirements, such as the amount of data reads and writes. Table 1 displays the costs for different cloud memory operations, and Table 2 represents the performance critical points for different cloud memory limits. β is a criterion for memory limits: there are two working modes based on βs, including a normal working status and an over-limit status. According to Table 2, M1 has a critical capability

point at 4 GB, which means the performance will dramatically diminish when the input dataset is larger than 4 GB. The example configuration is that there are 2 M1, 2 M2, 3 M3, and 3 M4 available.

The main costs derive from four aspects: Read (R), Write (W), Communications (C), and Move (MV). R and W refer to the operation costs of reading and writing data. C means the costs happened to communication between nodes, such as networking communications. MV represents the costs taken place at the occasions when switching from one memory vendor to another set. In Table 1, the parameter (PAR) β represents the performance critical point considered for each memory.

TABLE 1
The costs for different cloud memory operations

Operations  PAR  M1   M2   M3   M4
Read (R)    β1   4    2    1    50
            β2   500  500  500  500
Write (W)   β1   6    4    2    50
            β2   500  500  500  500
Comm. (C)        10   10   10   10
Move (MV)   M1   0    4    6    40
            M2   3    0    5    40
            M3   2    3    0    40
            M4   40   40   40   0

Table 3 represents the number of memory accesses and data sizes for the seven input data, A to G. For example, data A are required to read 7 times and write 6 times, for 13 accesses in total deriving from (7+6), and the data size is 2.35 GB. Before the data are assigned to cloud memories, the entire input data need to be partitioned.

The costs of allocating each data to memory units are given in Table 4, which is used to map all the costs for all potential activities of the data. This table is called B Table. For example, the data allocation costs for data A are respectively 77 (M1), 48 (M2), 34 (M3), and 700 (M4).

TABLE 4
B Table: The costs of allocating each data to different memory units

Data  M1   M2    M3    M4
A     77   48    34    700
B     55   34    27    500
C     118  7050  70    710
D     23   1010  1015  150
E     30   23    15    250
F     110  86    68    610
G     38   2513  18    300

Based on Table 4, we remap the data as shown in Table 5. There are two steps in the remapping process. First, we assign an index number to each memory by 1 → M1, 2 → M2, 3 → M3, and 4 → M4. The next step is sorting the costs by the memory index number for each data: we sort the data in an ascending order, from the left side to the right side in the table. The remapping table is named as D Table. For instance, the sorted order for A is M3, M2, M1, and M4 in Table 5.

TABLE 5
D Table: Sorted memory indices deriving from Table 4

A  M3 (34)  M2 (48)   M1 (77)   M4 (700)
B  M3 (27)  M2 (34)   M1 (55)   M4 (500)
C  M3 (70)  M1 (118)  M4 (710)  M2 (7050)
D  M1 (23)  M4 (150)  M2 (1010) M3 (1015)
E  M3 (15)  M2 (23)   M1 (30)   M4 (250)
F  M3 (68)  M2 (86)   M1 (110)  M4 (610)
G  M3 (18)  M1 (38)   M4 (300)  M2 (2513)

According to the memory availability, we select memories for each data, starting with the top of the table to the bottom: we always select the lowest cost from the available memories. In addition, an adjustment will be made after a calculation of the proposed genetic algorithm. Compared with the First-In-First-Out (FIFO) algorithm, our approach reduces 96.5% of the cost in this case; compared with the Greedy algorithm, our approach can reduce 3.7% of the cost in this example.

4 CONCEPTS AND THE PROPOSED MODEL

We demonstrate the proposed model and define the main concepts used in the model in this section. The operating principle of the model is given along with the proof.

4.1 Execution Model and Definition

The execution model is based on the architecture represented in Fig. 1, in which the mapped costs and the numbers of accesses match the conditions of our proposed model. The model is supported by our proposed algorithm, which is used to generate optimized memory selections based on the data from B Table.
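The lowest-cost-first selection over the B Table and D Table from the motivational example in Section 3 can be sketched as follows. This is our illustration, not the authors' implementation: it uses the B Table costs and the availability counts (2 M1, 2 M2, 3 M3, 3 M4), processes the data in the order A to G, and ignores the Move and Communication adjustments that the full model also applies.

```python
# Illustrative sketch: rebuild the D Table from the B Table and apply
# lowest-cost-first selection under the memory availability counts.
B_TABLE = {
    "A": {"M1": 77, "M2": 48, "M3": 34, "M4": 700},
    "B": {"M1": 55, "M2": 34, "M3": 27, "M4": 500},
    "C": {"M1": 118, "M2": 7050, "M3": 70, "M4": 710},
    "D": {"M1": 23, "M2": 1010, "M3": 1015, "M4": 150},
    "E": {"M1": 30, "M2": 23, "M3": 15, "M4": 250},
    "F": {"M1": 110, "M2": 86, "M3": 68, "M4": 610},
    "G": {"M1": 38, "M2": 2513, "M3": 18, "M4": 300},
}
AVAILABLE = {"M1": 2, "M2": 2, "M3": 3, "M4": 3}

def d_table(b_table):
    # D Table: for each data item, memories sorted by ascending cost.
    return {d: sorted(costs, key=costs.get) for d, costs in b_table.items()}

def greedy_allocate(b_table, available):
    # Walk each data item's sorted memory list and take the cheapest
    # memory that still has free units; decrement its availability.
    alloc, free = {}, dict(available)
    for d, order in d_table(b_table).items():
        for m in order:
            if free[m] > 0:
                alloc[d] = m
                free[m] -= 1
                break
    return alloc

alloc = greedy_allocate(B_TABLE, AVAILABLE)
total = sum(B_TABLE[d][m] for d, m in alloc.items())
print(alloc, total)
```

Under these assumptions the first three items claim the three cheap M3 units, so later items such as E and F fall back to M2; the adjustment step of the proposed genetic algorithm would then refine this initial plan.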

The task addressed by the proposed CAHCM is to execute data allocations for minimizing the total costs, which is mainly supported by our proposed algorithm. The crucial component of CAHCM is the task assignment to heterogeneous memories offered by distinct cloud vendors based on the memories' availabilities. The cloud memories system provides a manageable data allocation platform, which allows tasks to be assigned to a distributed memory system.

Fig. 2 represents an operation flow model of CAHCM. According to Fig. 2, the task assignment process mainly includes two steps. First, an initial plan is created by using the Greedy algorithm. Two sub-tasks form this step: the costs are mapped, mainly deriving from three categories of information. The first category is the input data information, including the number of data, data sizes, and locations. The second category, Memory Availability, refers to the availabilities of the memories, which may include amount, capacities, connection status, time, and costs. The third category, Memories Costs, is the consumption information corresponding to each available memory, from which the information of data operations in each cloud memory is gained; the corresponding table in Section 3 is Table 1. Second, an improved data allocation plan is created by implementing the genetic algorithm, the 2DA algorithm; using this information, the task assignment may be adjusted.

The inputs are the initial data allocations as well as memories' availabilities. The main inputs include the number of data, the initial locations of the data, the costs of Read and Write for each memory, the costs of the Move between memories, the numbers of Read and Write accesses, the critical points of the memories, the number of memories and their availabilities, and the cloud-based heterogeneous memories' capacities. The output is a data allocation scheme based on the available cloud memories, provisioning the optimal solution for minimizing total costs.

Fig. 2. Operation flow model of the Cost-Aware Heterogeneous Cloud Memory Model.

4.2 Problem Statement

The calculations of the total cost need to consider the cloud memories' capabilities [5]. The capacities of the heterogeneous memories in cloud computing can vary, which depends on the offerings of cloud providers. The limits are determined by β, which maps the critical capability points for different memories. Moreover, a Move cost is one of the costs that need to be considered, and Other Costs means the costs caused by other computing resources except the costs deriving from memories or operations between memories, such as transmission losses. In our model, we assume that the data partitions are processed.

To compare the costs and task assignments, we formulate the cost calculation method shown in Eq. (1). CTotal refers to the total cost, and there are n memories operated in the data processing, such that the sum of all memories' costs is Σ C(i). CiR refers to the cost of Read and NiR refers to the number of the Read accesses; CiW refers to the cost of Write and NiW refers to the number of the Write accesses; CiMV refers to the cost of data moves and CiOther refers to other costs. The total cost is a sum of all memories' costs with four types of costs:

  CTotal = Σ_{i=1}^{n} C(i) = Σ_{i=1}^{n} (CiR × NiR + CiW × NiW + CiMV + CiOther)    (1)

The target of our proposed approach is minimizing the value of CTotal. The aimed problem is to find out the data allocation solution optimally minimizing the total cost, which is aligned with the COPHM problem defined by Definition 1.

Definition 1. Cost Optimization Problem on Heterogeneous Memories (COPHM): Given the initial status of input data, the cloud-based heterogeneous memories' capacities, the number of memories and their availabilities, the required Read and Write access amounts, the costs of Read and Write for each memory, and the costs of the Move between memories, the problem is to find out the data allocation solution optimally minimizing the total cost.

To address the cost minimization, we consider minimizing the total costs such that the COPHM problem can be reducible to the Bin Packing Problem (BPP), which is a combinatorial NP-hard problem [41], [42]. The volumes of bins are memories' capacities, the items are input data, and the values are costs of data operations. The heterogeneous memories can construct a few bins such that each bin can refer to one memory. Thus, BPP ≤p COPHM; therefore, the COPHM problem is also a NP-hard problem. Assume that there are m memories in total and d data, where d ≤ m; the minimum cost can be found from a permutation of P(m, d) = m!/(m − d)!, and normally a traversing search is required for comparing the costs. Our proposed approach can gain optimal solutions within the constraint conditions and suboptimal solutions under the non-constraint environment.

4.3 Optimal Constraints for Genetic Algorithm

Our work has found that using the Greedy algorithm could provide optimal solutions under certain constraints and suboptimal solutions in other situations. Being aware of the optimal solution constraint is the critical component for achieving a high rate of producing optimal solutions.
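The total-cost formulation of Eq. (1) can be transcribed directly. The figures below are hypothetical placeholders for a two-memory scenario, not values taken from the paper's tables:

```python
# Direct transcription of Eq. (1): each operated memory i contributes
# read, write, move, and other costs to the total.
def total_cost(memories):
    # memories: list of dicts holding unit costs C_R, C_W, access counts
    # N_R, N_W, and lump-sum move (C_MV) and other (C_other) costs.
    return sum(m["C_R"] * m["N_R"] + m["C_W"] * m["N_W"]
               + m["C_MV"] + m["C_other"] for m in memories)

# Hypothetical two-memory example (placeholder numbers).
example = [
    {"C_R": 4, "N_R": 7, "C_W": 6, "N_W": 6, "C_MV": 0, "C_other": 0},
    {"C_R": 2, "N_R": 6, "C_W": 4, "N_W": 3, "C_MV": 3, "C_other": 0},
]
print(total_cost(example))  # (28 + 36) + (12 + 12 + 3) = 91
```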

We present an approach of overcoming the optimal solution constraint by applying the genetic algorithm, which focuses on the adjacent path comparisons and decisions in D Table. In some cases, using the Greedy algorithm can generate suboptimal solutions that need adjustments: an adjustment needs to be done in order to improve the sub-optimal solution to being the optimal solution, and the adjustment is processed by our proposed genetic algorithm.

First, consider a simple case. Assume that there are two data A and B that need to be assigned, represented as A = ⟨a1, b1⟩ and B = ⟨a2, b2⟩, where ai is the access number of the Read and bi is the number of the Write. ∃ two memories M1 and M2, represented as M1 = ⟨R1, W1⟩ and M2 = ⟨R2, W2⟩, where Ri refers to the Read cost and Wi refers to the Write cost. We set (a1 + b1) > (a2 + b2), R1 > R2, and W1 > W2; following the priority selection principle, data A have the selection priority. We define the process of the data allocation as a path: Cia → Cjb means the data allocations are ⟨A → Memory i⟩ and ⟨B → Memory j⟩, where Cij means the cost of using the i memory for data j; for example, the path BA means the data allocation order from data B to A. We use MCi to present the sum of Move and Communication costs. In this case, our approach uses the Greedy algorithm to determine the path C2a → C1b. The table mapping is represented in Table 6.

TABLE 6
Table mapping for the cost comparisons between data allocations

B   C2b   C1b
A   C2a   C1a

The comparison for the optimal solution requirement is shown in Eq. (2):

  C2b + C1a ≤ C1b + C2a ⇒ C1a − C2a ≤ C1b − C2b    (2)

The costs of the memories consist of Rx and Wy. The mathematical expressions of the Ci are given in Eq. (3):

  C1a = R1 × a1 + W1 × b1 + MC1
  C2a = R2 × a1 + W2 × b1 + MC2
  C1b = R1 × a2 + W1 × b2 + MC3    (3)
  C2b = R2 × a2 + W2 × b2 + MC4

Align with Eq. (2), we gain the following result and consider three situations based on the comparisons between variables a1 and a2 as well as variables b1 and b2:

  (R1 − R2)(a1 − a2) + (W1 − W2)(b1 − b2) ≤ MCtotal, where MCtotal = MC2 + MC3 − MC1 − MC4.

Consider the practical scenario and configure R12 = R1 − R2, W12 = W1 − W2, a12 = a1 − a2, and b12 = b1 − b2. The accomplishment of Eq. (4) shows a fundamental relation of obtaining an optimal solution under the two-memory configuration:

  R12 a12 + W12 b12 ≤ MCtotal    (4)

For the general case, the theorem stating the optimal solution restraints is given in Theorem 4.1, which represents the constraints for obtaining optimal solutions.

Theorem 4.1. ∃ a set of memories M with {Mi, Mj, Mk, Mf} ⊆ M, and two adjacent data A = ⟨aR, aW⟩ and B = ⟨bR, bW⟩, where aR refers to the Read access number and aW refers to the Write access number for data A, and likewise bR and bW for data B. We consider data B the data having lower selection priority than A. ∀ four adjacent data allocations Cia, Cka, Cfb, and Cjb in D Table, with Cia < Cka and Cfb < Cjb, where Cka and Cjb locate at the right side in D Table. Configure Rfj = Rf − Rj, Wfj = Wf − Wj, Rki = Rk − Ri, Wki = Wk − Wi, and MCtotal = MCi + MCj − MCk − MCf. The data allocation ⟨A → Mi, B → Mj⟩ is an optimal solution for minimizing total costs under the four-memory configuration when the condition meets the requirement given in Eq. (5):

  Rfj bR + Wfj bW + Rki aR + Wki aW ≤ MCtotal    (5)

Proof. We use contradiction proof to prove the correctness of Theorem 4.1. We use P to represent the satisfaction of Eq. (5) and G to denote the optimal solution; if satisfying Eq. (5), then P ⇒ Cia + Cjb ≤ Cfb + Cka. Assume that P is true and G is false. Since P is true, ∃ Cia < Cka and Cfb < Cjb. G is false ⇒ there exists a solution (Cxa + Cyb) offering a lower cost than (Cia + Cjb); thus, ¬G ⇒ (Cia + Cjb) > (Cxa + Cyb), where x, y ∈ {i, j, k}, Cxa ∈ [Cia, Cka], and Cyb ∈ [Cfb, Cjb], i.e., the adjacent paths formed by Cia < Cxa < Cka and Cfb < Cyb < Cjb. Then ∃ Cxa − Cia > Cjb − Cfb ⇒ Cjb − Cyb > Cxa − Cia > Cjb − Cfb ⇒ Cyb < Cfb, which conflicts with Cfb < Cyb. Therefore, P ⇒ G. Proved.
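As a minimal sketch of the Eq. (5) test (our naming, with dictionaries standing in for the paper's subscripted symbols), the condition for a candidate adjacent allocation can be evaluated as follows; the example numbers are arbitrary:

```python
# Checks the condition of Eq. (5) for the allocation <A -> Mi, B -> Mj>.
# Memories carry Read/Write unit costs (keys "R" and "W"); MC holds the
# Move-plus-Communication sums MCi, MCj, MCk, MCf. All names and the
# example figures are illustrative stand-ins, not the paper's data.
def satisfies_eq5(A, B, Mi, Mj, Mk, Mf, MC):
    aR, aW = A  # Read/Write access counts of data A
    bR, bW = B  # Read/Write access counts of data B
    # Rfj*bR + Wfj*bW + Rki*aR + Wki*aW, with Rfj = Rf - Rj, etc.
    lhs = ((Mf["R"] - Mj["R"]) * bR + (Mf["W"] - Mj["W"]) * bW
           + (Mk["R"] - Mi["R"]) * aR + (Mk["W"] - Mi["W"]) * aW)
    mc_total = MC["i"] + MC["j"] - MC["k"] - MC["f"]
    return lhs <= mc_total

# Arbitrary demonstration values with zero Move/Communication costs.
ok = satisfies_eq5((7, 6), (6, 3),
                   Mi={"R": 1, "W": 2}, Mj={"R": 2, "W": 4},
                   Mk={"R": 5, "W": 6}, Mf={"R": 10, "W": 10},
                   MC={"i": 0, "j": 0, "k": 0, "f": 0})
```

A plan entry failing this check is a candidate for the adjustment step performed by the proposed genetic algorithm.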

5 ALGORITHMS

In this section, we provide descriptions of two crucial algorithms of CAHCM. Section 5.1 illustrates the 2DA algorithm and Section 5.2 exhibits the B Table Generation Algorithm.

5.1 Dynamic Data Allocation Advance (2DA) Algorithm

As mentioned in Section 4, we propose the Dynamic Data Allocation Advance (2DA) algorithm, which aims to generate an adaptable data allocation solution, either an optimal solution or a sub-optimal solution. This algorithm is designed to solve the COPHM problem in polynomial time. Two crucial procedures are involved in this algorithm. First, we use the Greedy algorithm to create a solution that can be either an optimal or a suboptimal solution. Next, we use the genetic algorithm to improve the solution, which applies the operating principle shown in Eq. (5). The initial plan created by the Greedy algorithm is assessed against the optimal solution constraints to determine whether the plan needs to be adjusted; the data allocation plan is updated if lower costs are gained within the configured generation rounds. Our genetic algorithm emphasizes the constraints and configures the evolutive principle according to the findings in Section 4. The outputs of implementing the 2DA algorithm are optimal solutions under the constraints, or suboptimal solutions within non-constraint conditions.

We use D_i to denote the input data, where i refers to the data index. The number of the input data is N_d and the data size for each data is S_d(i). The required numbers of the Read and Write accesses for each input data are R_d(i) and W_d(i). The number of cloud memory types offered by one cloud vendor is N_mt; each type has k available memories and we use M_mt(g) to denote each memory. The critical point for each memory is β. The cost of each Read access is Cr^β_d(i) when S_d(i) < β and Cr^!β_d(i) when S_d(i) ≥ β; correspondingly, the cost of each Write is Cw^β_d(i) when S_d(i) < β and Cw^!β_d(i) when S_d(i) ≥ β. The Move costs between memories are M_{a→b}, which means the data move from Memory a to Memory b. In the formulation C(m, n) calculating the total costs, m means the memory selection and n means the data. The inputs mainly include information about the data and the availability information about the heterogeneous cloud memories; the output is the Data Allocation Table (DataAllocation). An example of the B Table is given in Table 4, and the cost notations are illustrated by Table 5 in Section 3.

The main phases of Algorithm 5.1 are:
1) Input the B Table generated by Algorithm 5.2.
2) Sum up R_d(i) + W_d(i) for each data and sort the values in a descending order.
3) Sort the memory options in an ascending order of cost for each data according to the required costs.
4) Allocate the data to the available memories that have the highest priority till the last data, following the availability priority principle in the D Table, by which the initial DataAllocation strategy is made.
5) An adjustment is made by accomplishing a genetic operation based on the mappings of the B Table, in which a series of comparisons are processed. The comparison is based on the optimal solution constraints defined by Eq. (5).
6) Update the plan if a better solution is found, until the configured rounds end.
7) Return the completed DataAllocation strategy for data allocations.

The pseudo codes of the 2DA algorithm are given in Algorithm 5.1.

Algorithm 5.1 Dynamic Data Allocation Advance (2DA) Algorithm
Require: B Table, N_mt, k, {M_mt(g)}
Ensure: DataAllocation
1: Input B Table
2: Initialize all availabilities of the cloud memories, {M_mt(g)}
3: ∀ data, sort data by the values of (R_d(i) + W_d(i)) in a descending order
4: for ∀ input data do
5:   Sort memory allocation costs in an ascending order
6:   /*According to B Table*/
7:   for ∀ the cost for each data allocation manner do
8:     if the memory (M_mt(g)) is available then
9:       Allocate data to this memory
10:      /*D_i → M_mt(g)*/
11:      The number of this type of memory − 1
12:    end if
13:  end for
14: end for
15: for i = 1 to Constant do
16:   for j = 1 to data.size do
17:     if (Cost_j(Plan_j) + Cost_{j−1}(Plan_{j−1})) > (Cost_{j−1}(Plan_j) + Cost_j(Plan_{j−1})) then
18:       Plan_j ↔ Plan_{j−1}
19:       /*Switch the memory selections*/
20:     end if
21:   end for
22: end for
23: RETURN DataAllocation
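The flow of Algorithm 5.1 can be sketched as follows. This is a simplified illustration, not the paper's implementation: `cost(d, m)` stands in for the B Table lookup, `memories` holds per-type capacities, and the genetic phase is reduced to the pairwise-swap test of lines 15–22.

```python
# Hedged sketch of the 2DA flow: greedy allocation followed by
# swap-based adjustment rounds (all names are illustrative assumptions).

def two_da(data, memories, cost, rounds=10):
    """data: list of (reads, writes) per data item;
    memories: list of remaining capacities per memory type;
    cost(d, m): allocation cost of data index d on memory index m (B Table).
    Returns a dict mapping data index -> memory index."""
    # Phases 1-2: sort data by total access count, descending (greedy priority).
    order = sorted(range(len(data)), key=lambda d: sum(data[d]), reverse=True)
    capacity = list(memories)
    plan = {}
    for d in order:
        # Phases 3-4: try memories in ascending cost order, take first available.
        for m in sorted(range(len(capacity)), key=lambda m: cost(d, m)):
            if capacity[m] > 0:
                plan[d] = m
                capacity[m] -= 1
                break
    # Phases 5-6: adjustment rounds; swap adjacent assignments when the
    # swapped pair is cheaper (Algorithm 5.1, lines 15-22).
    for _ in range(rounds):
        for j in range(1, len(order)):
            d1, d2 = order[j - 1], order[j]
            if cost(d2, plan[d2]) + cost(d1, plan[d1]) > \
               cost(d2, plan[d1]) + cost(d1, plan[d2]):
                plan[d1], plan[d2] = plan[d2], plan[d1]
    return plan  # Phase 7
```

A full genetic implementation would additionally keep a population of plans and apply crossover/mutation; the swap loop above captures only the constraint-driven adjustment step.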

5.2 B Table Generation Algorithm

As mentioned in Section 5.1, we need to map the costs for each data and the corresponding memories in a B Table. The purpose of this algorithm is mapping the costs with the corresponding data allocations in a computing friendly manner, so that an efficient solution can be produced within a short period of time. This algorithm considers the memory critical points, β. The generated B Table is one of the main inputs of Algorithm 5.1.

The main phases of Algorithm 5.2 include:
1) Input all required data, including D_i, N_d, N_mt, k, M_mt(g), S_d(i), R_d(i), W_d(i), Cr^β_d(i), Cr^!β_d(i), Cw^β_d(i), Cw^!β_d(i), M_{a→b}, and β. Initialize a temporary variable Cost_temp and assign an empty value.
2) Use Eq. (2) to calculate the value of the total cost for each possible data allocation considering the memory capacities. Assign the values of the calculation outcomes to the temporary variable Cost_temp.
3) Generate the B Table by assigning the values of Cost_temp to the corresponding spots in M_mt(g). This is the main procedure of the B Table generation.
4) Return the B Table, which is used as the input of Algorithm 5.1.

Algorithm 5.2 B Table Generation Algorithm
Require: D_i, N_d, N_mt, k, M_mt(g), S_d(i), R_d(i), W_d(i), Cr^β_d(i), Cr^!β_d(i), Cw^β_d(i), Cw^!β_d(i), M_{a→b}, β
Ensure: B Table
1: Initialize all input data
2: for ∀ data do
3:   for ∀ memories {M_mt(g)} do
4:     Initialize Cost_temp ← 0
5:     if S_d(i) < β then
6:       Cost_temp ← Cost_temp + R_d(i) × Cr^β_d(i)
7:       Cost_temp ← Cost_temp + W_d(i) × Cw^β_d(i)
8:     else
9:       Cost_temp ← Cost_temp + R_d(i) × Cr^!β_d(i)
10:      Cost_temp ← Cost_temp + W_d(i) × Cw^!β_d(i)
11:    end if
12:    Cost_temp ← Cost_temp + M_{a→b}
13:    Cost_temp ← Cost_temp + communication costs
14:    Data allocation cost to M_mt(g) ← Cost_temp
15:  end for
16: end for
17: RETURN B Table

In summary, the two algorithms illustrated in this section support the two main procedures of the model; the 2DA algorithm is an approach using both the Greedy and genetic algorithms. The following section demonstrates our experimental evaluations.
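The per-item cost computation of Algorithm 5.2 can be sketched as below. The dictionary field names (`cr_lo`, `cw_lo`, `cr_hi`, `cw_hi`, `M_ab`) and the flat `comm_cost` parameter are illustrative assumptions standing in for Cr^β, Cw^β, Cr^!β, Cw^!β, the move cost, and the communication cost.

```python
# Hedged sketch of B Table generation (Algorithm 5.2): the threshold beta
# models the memory "critical point" at which per-access prices change.

def build_b_table(data, memories, beta, comm_cost=0.0):
    """data: list of dicts with size S, read count R, write count W.
    memories: list of dicts with per-access prices below (cr_lo/cw_lo) and
    at-or-above (cr_hi/cw_hi) beta, plus a move cost M_ab.
    Returns table[i][g] = allocation cost of data i on memory g."""
    table = []
    for d in data:
        row = []
        for m in memories:
            if d["S"] < beta:
                cost = d["R"] * m["cr_lo"] + d["W"] * m["cw_lo"]
            else:
                cost = d["R"] * m["cr_hi"] + d["W"] * m["cw_hi"]
            cost += m["M_ab"] + comm_cost   # move cost plus communication cost
            row.append(cost)
        table.append(row)
    return table
```

The resulting table plays the role of the B Table consumed by the greedy phase of Algorithm 5.1.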
6 EXPERIMENT AND THE RESULTS

We examined our proposed approach through an experimental evaluation. Section 6.1 shows our experimental configurations and Section 6.2 displays part of our experimental results.

6.1 Experimental Configurations

We used experimental evaluations to assess the performance of the proposed scheme from two crucial aspects, namely the efficiency and the cost-saving performances. First, we estimated the efficiency by evaluating the execution time. Two main comparisons were done in this evaluation. One comparison investigated the execution time differences between the proposed scheme and the brute force method when various amounts of the cloud memories are executed. The other comparison assessed the execution time trend of the proposed model in various memory deployments. In addition, our experiment also tested the performance of saving costs by evaluating the proportions of the optimal and suboptimal solutions out of a great number of experiments.

We configured the difference (Cost) between the optimal solution costs (Cost_op) and the costs (Cost_CAHCM) required by our proposed approach, which is represented as Cost = Cost_CAHCM − Cost_op. We defined the Accuracy Ratio (AR) as the ratio between Cost and the optimal solution cost, which used the following formula: AR = Cost / Cost_op.

For the purpose of simulating cloud systems, we used two servers to imitate the operations of distributed cloud servers. The two servers acted as a remote server and an on-premise client end, respectively. The operating system on both servers was Ubuntu 15.04. The input data were generated in a random setting in order to simulate the distributed data offload in practical scenarios.

We had two main experimental settings that were designed to assess the proposed approach's performance from different perspectives:
• Setting 1: We used a group of settings to evaluate the performance of CAHCM with different input data sizes, which included Setting 1-1, Setting 1-2, Setting 1-3, and Setting 1-4, with input data sizes ranging from 20 KB up to 250 MB and 500 MB. These settings were designed to examine the adaptability and the execution time while the input data sizes were varied.
• Setting 2: We simulated multiple memory deployments, from 5 to 20 cloud memories, to assess the performance of data allocations in heterogeneous cloud memories, as well as the corresponding memory performances under different βs.
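The cost-gap metric defined above, together with the AR intervals used later when reporting the results, can be sketched as follows (function names and the bucketing helper are illustrative assumptions).

```python
import bisect

# Hedged sketch of the Accuracy Ratio (AR) metric from Section 6.1 and the
# AR1-AR11 intervals used when reporting results.

def accuracy_ratio(cost_cahcm, cost_opt):
    """AR = (Cost_CAHCM - Cost_op) / Cost_op; AR == 0 -> optimal solution."""
    return (cost_cahcm - cost_opt) / cost_opt

# Upper bounds of intervals AR2..AR11; AR1 is reserved for exact optimality.
_BOUNDS = [0.01, 0.02, 0.03, 0.04, 0.05, 0.10, 0.20, 0.50, 1.00]

def ar_bucket(ar):
    """Map an AR value to its interval label AR1..AR11."""
    if ar == 0:
        return "AR1"
    return "AR{}".format(bisect.bisect_left(_BOUNDS, ar) + 2)
```

For example, an allocation costing 2% more than the optimum falls into AR3 = (1%, 2%], and anything more than double the optimal cost falls into AR11.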

6.2 Experimental Results

This section represents a few experimental results based on our experimental configurations.

Fig. 3 represents the time consumptions between our proposed scheme and the brute force method using 4 and 5 cloud memories under setting 1-1. As shown in the figure, the time differences between the proposed approach and the normal method were great. For example, using 5 cloud memories, CAHCM could allocate data within 5.1 ms, which was much shorter than the brute force method that required 22903.3 ms. Moreover, the data allocation tasks using 6 memories cannot be accomplished by the brute force method in an acceptable polynomial time. It implies that our proposed scheme could solve N-memory deployments in a dramatically short period, which met the heterogeneous cloud memories' needs.

Fig. 3. Time consumptions between CAHCM and Brute Force using 4 and 5 memories under setting 1-1.

Moreover, we further evaluated the execution time by comparing various numbers of cloud memories. The execution time examinations mainly focused on various input data sizes while deploying different amounts of the cloud memories. The performance comparisons were given between the proposed CAHCM and other available algorithms under settings 1-1, 1-2, 1-3, and 1-4. In general, the execution time of applying CAHCM only had a slight growth from 5 to 20 cloud memories, ranging roughly from 5 to 550 ms.

Fig. 4 exhibits the execution efficiency performance comparisons using CAHCM under different amounts of the cloud memories under setting 1-1. The execution time had a positive relationship with the number of the memories; the average increase rate of the execution time was 7.8% when one cloud memory was added.

Fig. 4. Execution efficiency performance comparisons using CAHCM using different amounts of cloud memories under setting 1-1.

Fig. 5 represents the execution time evaluations under setting 1-2. The movement had a similar trend line; the average increased execution time was 218 ms as the number of the cloud memories grew.

Fig. 5. Execution efficiency performance comparisons using CAHCM using different amounts of cloud memories under setting 1-2.

Fig. 6 shows the execution efficiency performance evaluations under setting 1-3. The average execution time increase rate of each cloud memory increment remained at a similarly low level.

Fig. 6. Execution efficiency performance comparisons using CAHCM using different amounts of cloud memories under setting 1-3.

Fig. 7 displays the experimental results for evaluating the execution efficiency under setting 1-4. Under the same configuration, the execution time was associated with the number of the cloud memories in our experimental evaluations, and the input data sizes also had a positive relationship with the execution time. The average increase rate for inserting each cloud memory was 4.8%, and the average increase time was 246 ms.

Fig. 7. Execution efficiency performance comparisons using CAHCM using different amounts of cloud memories under setting 1-4.

Next, we used AR to represent the portions of the optimal and suboptimal solutions. We partitioned the AR values into the following intervals: AR1 = 0%, AR2 = (0, 1%], AR3 = (1%, 2%], AR4 = (2%, 3%], AR5 = (3%, 4%], AR6 = (4%, 5%], AR7 = (5%, 10%], AR8 = (10%, 20%], AR9 = (20%, 50%], AR10 = (50%, 100%], and AR11 = (100%, +∞]. AR1 depicts the percentage of the optimal solutions.

Fig. 8 represents the comparisons of the proportions of the optimal and suboptimal solutions among the CAHCM, Greedy, and Random algorithms, in which AR1 to AR11 illustrate the different portions under various settings. Our experimental results showed that the optimal solutions took more than 90% out of all the examinations, so the ratio of the optimal solutions was in an acceptable range. If the acceptable scope of AR was [0, 2%], which could be considered the set of optimal and suboptimal solutions, the percentage of the acceptable solutions reached by CAHCM exceeded 93%. Under the same configuration, the Greedy algorithm could achieve only a 6.06% chance and the Random algorithm only a 5.05% chance of reaching the acceptable solutions at the scope [0, 2%]. Therefore, our proposed CAHCM approach had an obvious advantage in gaining acceptable solutions.

Fig. 8. Ratios of the optimal and suboptimal solutions examined by ratios for solving four dimensional data allocations.

Furthermore, Fig. 9 represents a comparison rate between CAHCM and the other two algorithms. We defined Comparison Rate = Algorithm / CAHCM. When the rate was less than 1, it meant that the assessed algorithm had lower chances than CAHCM; when the rate was greater than 1, a higher value meant a greater difference gap. As shown in the figure, the results depicted that our approach could obtain a dramatically high rate of optimal solutions.

Fig. 9. Comparisons of the ratios of the optimal and suboptimal solutions examined by ratios for solving four dimensional data allocations for CAHCM, Greedy, and Random algorithms.

7 CONCLUSIONS

This paper proposed a novel approach solving the problem of data allocations in cloud-based heterogeneous memories, which can be applied in big data for smart cities. The proposed model, CAHCM, was designed to mitigate the big data processing to remote facilities by using cloud-based memories. The main algorithm supporting CAHCM is the 2DA algorithm, which, according to our experimental results, could output optimal solutions at a dramatically high rate within a short execution time.

A metric between probability distributions on finite dependent tasks on high-density multi-GPU systems. M. Y. Gu. S. Garg. Khan. IEEE Transactions on (TODAES). L. Zong. Ren. 2015. M. 5(4):512–524. 26(1):97– [33] J. Wu. Zhao. [30] X. 23(8):1467–1479. NY. K. [27] J. MONTH YEAR 11 [2] F. Information [35] Q. and X. 2014. She. 2013. Nanjing. Dutt. Shi. A tenant-based resource allocation model for scaling Computing. Proactive as a Service for cloud computing infrastructures and services. H. Hu. and R. Martello. 61(2):237–250.ieee. R. Deng. Preprocessing techniques for high-efficiency data [3] M. Q. Molina. Online data allocation Computing. Zong. Ding. Molina. H. He. 2012. 22(5):1645–1658. Yang. [19] M. Dma++: On the fly data realignment for on-chip features for image re-ranking. 2015:1. Qiu. Xhafa. Food balance M. 2015. IEEE Transac. Faloutsos. IEEE Transactions 8(6):866–883. 2016. A. Rui. Weng. Sha. 7(3):346– Processing. Sookhak. Qiu. tasks oriented energy-aware scheduling in virtualized clouds. Zhuge. log. scheduling on cloud computing platforms with iterativeordinal opti- [15] C. Dynamic energy-aware cloud computing and its development trends. Gani. Yu. Ayguade. Maruyama. cloud with efficient verifiable fine-grained updates. 2(4):380–394. Zhong. 2(1):85–98. 1996. Yang. Zhuge. and D. 142–146. Zhani. MonPaaS: an adaptive monitoring Platform [32] M. for hybrid memories on embedded tele-health systems. IEEE Transactions on Cloud Computing IEEE TRANSACTIONS ON CLOUD COMPUTING. Zang. computing environments.1109/TCC. [22] P. Y. X. Yin. SLA-Based resource Transactions on Cloud Computing. memories. pages 464–471. Duan. IEEE 2015. M. R. Systems. Park and J. A. ACM. but has not been fully edited. and W. on Parallel Processing (ICPP). Hellerstein. proof of retrievability in cloud computing with resource-constrained [12] C. 2013. X. 2016. Multi-aspect. T. 57(10):2464–2477. IEEE Transactions on Cloud Computing. Guo. and W. Yoo. pages 694–699. 2015. 
controls for mobile clouds in financial industry. Min. Transactions on Computers. 48:108–124. Cao. and S. In 2012 Fourth Int’l cloudlet-based mobile cloud computing model for green computing. Multi-objective game theoretic tions on Parallel and Distributed Systems. Gonzalez. 358. and J. IEEE Transactions on Services Computing. Zhan. Data mining with big data. Aizawa. New York. [34] R. G. Zhou and B. NO. Q. A. Yi. 2(2):168–180. C.This article has been accepted for publication in a future issue of this journal. H. Li. M. Transactions on Parallel and Distributed Systems. Z. [7] K. Lin. Communications. L. Opor: enabling 107. A. IEEE Transactions on Computer-Aided Design of Integrated X. C. Zhang. R. Gai and S. Gai. 2015. Multimedia. Vidyasagar. Cheung. Wang. Guo. IEEE Multimedia. and F. Chi. Victoria. Qiu. Panda. Y. Jie. Versteeg. IEEE Transactions on Cloud Computing. J. 2014. Q. IEEE [9] L. 2014. Efficient two-dimensional data allocation in IEEE 802. Nicolau. and C. X. Zhu. 5(4):525–539. S. auditing for securing big data storage in cloud computing. [6] N. Personal use is permitted. Buyya. 2014. Hu. and Z. IEEE Transactions on Cloud [16] M. XX. and W. IEEE. 2014. J. Morikawa. A data-driven paradigm for mapping problems. Ramı́rez. IEEE Transactions on Multimedia. Zhang. Cicconetti. Zhang. Prakash. Parallel Computing. Espadas. Aguado. J. [25] M. Vujic. M. but republication/redistribution requires IEEE permission. on Services Computing. Chen. Wong. Xue. neous workloads in data centers. Towards cloud computing: a literature review on [28] K. Buyya. Wang.2016. Future Generation [11] X. 2014. K.2594172. M. Content may change prior to final publication. M. IEEE Trans- [14] Z.org/publications_standards/publications/rights/index. 5(3):682–704. R. Song. Zhang.16 estimation by using personal dietary tendencies in a multimedia food OFDMA. 6(1):116–129. Qiu. [24] J. Hu. 64(12):3528 – 3540. IEEE Transactions on Knowledge and data Engineering. PP:1. Li. and devices. 
scheduling of bag-of-tasks workflows on hybrid clouds. Rao. Gao. J. 15(8):2176–2185. 29(1):273–286. Gai. Y. Wang. K. [41] C. [23] J. Monaci. See http://www. Tao. C. Future Generation Computer Systems. IEEE Transactions on Computers. and Z. 2013. Lin. Paris. A two-tiered on-demand resource perspective. 2012. Fu. Y. Zhu. Mingozzi. and H. Sha. and M. Lodi. 2015. 2014. Authorized public auditing of dynamic big data storage on 2015. J. 2014. [37] R. IEEE Transactions on Very Large Scale Integration [17] S. Lou. on Cloud Computing. Computer Systems. 2015. IEEE Transactions on Services Computing. X. A. M. Australia. Hwang. Andrzejak. T. Data mining: an overview from a database [4] Y. Hu. for Web applications in the cloud. Kondo. Rui. 2012. on High Performance Computing and Communications. 25(9):2234–2244. Yu. 2(1):29–42. Xie and X. IEEE Trans- [13] M. L. and A. Qiu. and L. G. Click prediction for web image rerank- An optimized computational model for multi-community-cloud social ing using multimodal sparse coding. In Proceedings of the 2015 ACM SIGMOD for scratch-pad memory on embedded multi-core systems. 2014. heterogeneity-aware resource provisioning in the cloud. D. Adaptive workflow Computing. Y. [10] C. Li. and C. [21] P. 2014. 16(1):159–168. Qiu. [40] P. B. L. 2000. K. Real-time tures. J. Qiu. Khan. Jiménez. . IEEE 17th International Conference on High Performance Computing and Transactions on Automatic Control. Pierre. pointing and migration on Amazon cloud spot instances. Boutaba.html for more information. pages optimization for hybrid scratch pad memory with SRAM and non- 574–579. 2013. Mining and forecasting [5] Y. on Multimedia Information Networking and Security. J. Ramirez. and E. and memory. E. Wu. L. Chen. Transformation-based monetary cost optimiza- [8] J. Data transfer scheduling for maximizing throughput 2012. pages 919–922. N. On-chip vs. Sha. F. 2013. Int’l Conf. Yu. [29] A. 2014. Matsubara. Luo. 32(6):809–817. Cabarcas. Y. J. 
IEEE Transactions on Image collaboration. S. of big-data computing in cloud systems. Optimal data allocation of big time-series data. Tao. Y. Chen. Enabling secure and mization. the data partitioning problem in embedded processor-based systems. Ranjan. efficient ranked keyword search over outsourced cloud data. CloudTPS: Scalable transactions actions on Cloud Computing. X. 2011. and R. robust. Circuits and Systems. and X. Zhang. J. and H. 23(5):2019–2032. K. 2014. 17(9):1417–1428. and E. 8(1):65–78. Dynamic remote data actions on Cloud Computing. 21(6):1094–1102. and Y. IEEE Transactions on Computers. W. Liu. X. allocation mechanism for vm-based data centers. Li. N. Software-as-a-Service applications over cloud computing infrastruc. and B. and C. Thuraisingham. Monetary cost-aware check. Phase-change compression in wireless multimedia sensor networks. IEEE Transactions on Cloud D. In 2015 IEEE sets of different cardinalities and applications to order reduction. Exploiting click constraints and multi-view and E. Conf. Qin. Tao. 2012. Zhu. volatile memory. Data placement Transactions on Services Computing. IEEE/ACM Transactions on Networking. L. Chen. Hao. 2013. Chen. USA. Alcaraz and J. Zhuge. and P. L. XX. J. Lenzini. off-chip memory: 62(11):2155–2168. Yang. W. Wei. VOL. A data-oriented method for scheduling [42] M. Melbourne. Sakurai. Chen. Dynamic Sciences. and memory exclusive guest OS fingerprinting. J. and tions for workflows in the cloud. 3(2):156–168. and duplication for embedded multicore systems with scratch pad [18] J. G. Tan. Li. 2012. and E. G. PP(99):1. 2014. Jia. C. H. Martorell. K. Sun. Wu. 2015. L. Li. Zhao. Zhang. pages Journal of Network and Computer Applications. Yin. Citation information: DOI 10. Han. 3(2):195–205. Advances in memory optimization for green cloud with genetic algorithm. Yi. IEEE Transactions on Services [36] F. Data allocation Conf. W. 2(1):14–28. In IEEE Int’l [38] J. Tseng. 
Cost-aware cooperative resource provisioning for heteroge. F. 59:46–54. M. Liu. Concha. and X. S. Gai. China. Connection discovery using big ACM Transactions on Design Automations of Electronic Systems data of user-shared images in social media. Liu. IEEE [39] Y. Shi. and D. Yang. Chen. In 2011 International Conference on Management of Data. 2168-7161 (c) 2016 IEEE. Cao. and Z. IEEE user-centric secure data scheme using attribute-based semantic access Transactions on Services Computing. 2015. provisioning for hosted Software-as-a-Service applications in cloud [31] Y. Li. IEEE Transactions 7(3):465–485. IEEE Transactions on Knowledge and Data Engineering. and [20] K. Wang. A. Prodan. IEEE Transactions on Multimedia. [26] Y. 2015.
