A Sophisticated Approach for Job Scheduling in Cloud Server
Amandeep Kaur Sidhu 1, Supriya Kinger 2
1 Research Fellow, 2 Asst. Professor, Sri Guru Granth Sahib World University, Fatehgarh Sahib, Punjab.
Abstract: Cloud computing is a new computing paradigm in which applications, data and IT services are provided over the Internet. In cloud computing, load balancing is required to distribute the dynamic workload evenly across all the nodes. It helps achieve high user satisfaction and a high resource utilization ratio by ensuring an efficient and fair allocation of every computing resource. Load balancing remains a key issue, however: maintaining load information is costly, since the system is too large to disperse load in a timely manner. This paper presents a hybrid control strategy for load balancing. Under existing schemes, even a high-priority job has to wait until the job currently executing finishes; the resulting delay can affect the final result. We therefore propose a hybrid approach for load balancing in a virtualized environment, using FCFS, RBAC, Round Robin, and priority queues, to reduce the burden on the executor.
I. INTRODUCTION
Cloud computing provides easy access to high-performance computing and storage infrastructure through web services. It offers massive scalability, reliability and configurability along with high performance. The cost of running an application on a cloud depends on the computation and storage resources consumed, and the flexibility of cloud computing comes from allocating resources on demand. It provides secure, quick and convenient data storage and computing power over the Internet. Virtualization, distribution and dynamic extendibility are the basic characteristics of cloud computing [1]. Nowadays most software and hardware provide support for virtualization. Many virtualized resources, such as hardware, software, operating systems and network storage, can be managed on a cloud computing platform, with each environment decoupled from the underlying physical platform. To make effective use of the tremendous capabilities of the cloud, efficient scheduling algorithms are required. These algorithms are applied by the cloud resource manager to dispatch tasks optimally to cloud resources. A relatively large number of scheduling algorithms exist to minimize the total completion time of tasks in distributed systems [2]. Such algorithms try to minimize the overall completion time by finding the most suitable resources to allocate to the tasks. It should be noted that minimizing the overall completion time of the tasks does not necessarily minimize the execution time of each individual task. The proposed system focuses on a hybrid approach to load balancing for high performance.
II. LOAD BALANCING
Load balancing is a method to distribute workload across one or more servers, network interfaces, hard drives, or other computing resources. Typical data centre implementations rely on large, powerful (and expensive) computing hardware and network infrastructure, which are subject to the usual risks associated with any physical device, including hardware failure, power and/or network interruptions, and resource limitations in times of high demand. Load balancing in the cloud differs from classical thinking on load-balancing architecture and implementation in that it uses commodity servers to perform the load balancing. This provides new opportunities and economies of scale, while presenting its own unique set of challenges. Load balancing is used to make sure that none of the existing resources are idle while others are being utilized. To balance load distribution, load can be migrated from source nodes (which have surplus workload) to comparatively lightly loaded destination nodes. When load balancing is applied at runtime, it is called dynamic load balancing; it can be realized in either a direct or an iterative manner, according to how the execution node is selected:
- In the direct methods, the final destination node is selected in one step.
- In the iterative methods, the final destination node is determined through several iteration steps.
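To make the distinction concrete, here is a minimal Python sketch of the two selection styles; the node names, load values and neighbour topology are illustrative assumptions, not part of the paper.

```python
def select_direct(loads):
    """Direct method: the final destination node is selected in one
    step (here, simply the globally least-loaded node)."""
    return min(loads, key=loads.get)

def select_iterative(loads, neighbours, start):
    """Iterative method: the destination is found through several
    iteration steps, hopping to the lightest neighbour until no
    neighbour is lighter than the current node."""
    current = start
    while True:
        best = min(neighbours[current], key=loads.get, default=current)
        if loads[best] >= loads[current]:
            return current
        current = best  # one iteration step toward a lighter node
```

In this sketch the direct method needs global load knowledge, while the iterative method needs only each node's neighbour loads, which is what makes it attractive when the system is too large to maintain global load information.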
It is a process of reassigning the total load to the individual nodes of the collective system to make resource utilization effective and to improve the response time of the job, simultaneously removing a condition in which some of the nodes are overloaded while others are underloaded. A dynamic load balancing algorithm does not consider the previous state or behaviour of the system; it depends only on the present behaviour of the system. The important things to consider while developing such an algorithm are: estimation of load, comparison of load, stability of different systems, performance of the system, interaction between the nodes, nature of the work to be transferred, selection of nodes, and many others [3]. The load considered can be in terms of CPU load, amount of memory used, delay, or network load.
International Journal of Computer Trends and Technology (IJCTT), Volume 4, Issue 7, July 2013, ISSN: 2231-2803, http://www.ijcttjournal.org
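Since the load of a node combines several of the dimensions just listed (CPU, memory, delay, network), one simple way to compare nodes is a weighted composite score. The following sketch and its weights are assumptions for illustration; the paper does not prescribe a particular combination.

```python
def load_score(cpu, mem, delay, net, weights=(0.4, 0.3, 0.15, 0.15)):
    """Combine normalised load dimensions (each in [0, 1]) into a
    single comparable score; the weights are illustrative assumptions."""
    w_cpu, w_mem, w_delay, w_net = weights
    return w_cpu * cpu + w_mem * mem + w_delay * delay + w_net * net
```

A balancer could then rank candidate destination nodes by this score instead of by any single raw metric.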
Goals of Load Balancing
The goals of load balancing are:
- To improve the performance substantially
- To have a backup plan in case the system fails, even partially
- To maintain system stability
- To accommodate future modifications to the system
Types of Load Balancing Algorithms
Depending on who initiated the process, load balancing algorithms can be of three categories as given in [3]:
- Sender Initiated: the load balancing algorithm is initiated by the sender.
- Receiver Initiated: the load balancing algorithm is initiated by the receiver.
- Symmetric: a combination of sender initiated and receiver initiated.
Depending on the current state of the system, load balancing algorithms can be divided into two categories, as given in [3]:
- Static: does not depend on the current state of the system; prior knowledge of the system is needed.
- Dynamic: decisions on load balancing are based on the current state of the system, and no prior knowledge is needed, so it is better than the static approach.
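A minimal sketch contrasting the two categories; the balancer functions and node names here are hypothetical, not from the paper.

```python
import itertools

def make_static_balancer(nodes):
    """Static policy: a fixed round-robin over the node list, decided
    in advance and ignoring the nodes' current load."""
    cycle = itertools.cycle(nodes)
    return lambda: next(cycle)

def dynamic_balancer(current_loads):
    """Dynamic policy: consult the current load of every node at
    dispatch time and pick the least-loaded one."""
    return min(current_loads, key=current_loads.get)
```

The static balancer never reacts to a hot node, while the dynamic one always routes to the lightest node at the moment of the decision, which is why the dynamic approach is preferred above.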
Load Balancing Challenges in the Cloud Computing
Although cloud computing has been widely adopted, research in cloud computing is still in its early stages, and some scientific challenges remain unsolved by the scientific community, particularly load balancing challenges [4].
Automated service provisioning: A key feature of cloud computing is elasticity: resources can be allocated or released automatically. How, then, can we acquire or release cloud resources while keeping the same performance as traditional systems and using optimal resources?
Virtual Machines Migration: With virtualization, an entire machine can be seen as a file or set of files; to unload a heavily loaded physical machine, a virtual machine can be moved between physical machines. The main objective is to distribute the load within a datacenter or across a set of datacenters. How, then, can we dynamically distribute the load when moving virtual machines so as to avoid bottlenecks in cloud computing systems?
Energy Management: Among the benefits that advocate adoption of the cloud is the economy of scale. Energy saving is a key point: a global economy is possible when a set of global resources is supported by a reduced number of providers, rather than each consumer owning its own resources. How, then, can we use only part of a datacenter while keeping acceptable performance?
Stored data management: In the last decade, data stored across the network has increased exponentially, both for companies outsourcing their data storage and for individuals; managing this stored data has become a major challenge for cloud computing. How can we distribute the data across the cloud for optimal storage while maintaining fast access?
Emergence of small data centers for cloud computing: Small datacenters can be more beneficial, cheaper and less energy-consuming than large datacenters, and small providers can deliver cloud computing services, leading to geo-diverse computing. Load balancing then becomes a problem on a global scale: ensuring an adequate response time with an optimal distribution of resources.
III. PROPOSED HYBRID APPROACH FOR LOAD BALANCING
As time goes on, the load on the cloud also increases. It often happens that one system using a resource is free while another is loaded with heavy tasks. The problem with the current scenario is that it does not specify anything about job priority. This work therefore implements a hybrid approach that helps reduce the workload of the executor. In this model we use FCFS, Round Robin, Priority, and job mapping policies.
3.1 Proposed Model
The proposed model focuses on the following three objectives, which help reduce the executor's load and are simulated in the Visual Studio environment using the Azure cloud:
a. To propose a new hybrid approach for load balancing.
b. To implement a job mapping policy with the priority concept.
c. To compare this approach with current state-of-the-art techniques.
In this proposed work, the admin first creates the jobs and specifies their name, specification, consumption, and total energy. The admin also implements the job mapping policy, under which one job cannot be mapped with more than 2 jobs. The other part of the work is done by the scheduler. The scheduler selects the jobs and schedules them using the hybrid approach, which is a combination of the First Come First Serve (FCFS) and Priority algorithms, and compares it with Round Robin as well.
3.2 Basic Block Design
This proposed hybrid approach is a combination of the Priority and FCFS algorithms.
Figure 1: Basic Block Design of Proposed Work
The hybrid technique is proposed to reduce the workload of the executor. The flow chart in Figure 1 shows the working of the technique.
Admin: In an organization, the admin creates the jobs, maps the dependencies between them, and specifies each task's name and specifications.
Job Mapper: The job a user has access to is mapped with the other jobs to check dependencies. One job cannot be mapped with more than 2 jobs.
Scheduler: The scheduler executes the jobs. If a job depends on another job, the scheduler checks the priority of the dependent job; if the dependent job has higher priority, it is executed first. If the job depends on more than one job and those jobs have the same priority, then, following the FCFS scheduling algorithm, the job that arrived first is executed first.
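The scheduler rule above can be sketched as follows; the job representation (dicts with `priority`, `arrival` and `deps` fields, higher priority meaning more urgent) is an assumption for illustration, not the paper's implementation.

```python
def next_dependency(job, jobs_by_name):
    """Among a job's dependencies, pick the one to execute first:
    highest priority wins; equal priorities fall back to FCFS
    (earlier arrival first). Returns None if the job has no deps."""
    deps = [jobs_by_name[name] for name in job["deps"]]
    if not deps:
        return None
    # Sort key: highest priority first, then earliest arrival.
    return min(deps, key=lambda j: (-j["priority"], j["arrival"]))
```

The tuple key encodes the hybrid rule in one comparison: priority dominates, and FCFS order only breaks ties.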
3.3 Algorithm-level Design
Fig. 2 represents the flow of steps involved in the algorithm-level design of the hybrid approach.
Figure 2: Proposed Algorithm-level Design
In this proposed system, the job count (number of jobs submitted for execution) is checked first. If it equals 1, the job is executed with FCFS; otherwise the job passes to the job mapper, which checks the mapping policy. If the job satisfies the mapping policy, it is executed with the hybrid approach; otherwise it is executed with FCFS.
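A sketch of this dispatch flow, under the assumption that each submitted job records the jobs it is mapped with in a `mapped_to` list (the field name is hypothetical):

```python
def choose_strategy(jobs):
    """Return the scheduling strategy selected by the flow in Fig. 2."""
    if len(jobs) == 1:
        return "FCFS"  # a single job is executed directly with FCFS
    # Mapping policy from Section 3.1: no job may be mapped
    # with more than 2 jobs.
    satisfies_policy = all(len(j["mapped_to"]) <= 2 for j in jobs)
    return "Hybrid" if satisfies_policy else "FCFS"
```

FCFS thus serves as the fallback path whenever the hybrid approach's preconditions (multiple jobs, valid mapping) do not hold.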
IV. RESULTS
This proposed hybrid approach gives the best results in terms of load on the system and the executor. Table 1 shows the comparison of FCFS, Round Robin, Priority and Hybrid using different parameters. It shows that the hybrid approach holds 2 jobs at the same time using only a single system, while the other approaches either hold only one job or overload more systems.
Table 1: Comparison of the Approaches

Parameters                  FCFS   Round Robin   Priority   Hybrid
No. of Jobs Executed          1         1            2         2
Execution Time (in Sec)      16        16           16        16
Temperature (in F)           15        16           11        10
No. of Systems Overloaded     1         1            2         1
Figure 3 shows the comparison of the algorithms and shows that the hybrid approach is better than all the other techniques, as it reduces the load on both the executor and the system.
Figure 3: Comparison of Load on Systems using (a) FCFS, (b) Hybrid, (c) Priority, (d) Round Robin
V. CONCLUSION & FUTURE SCOPE
In this paper, we proposed an efficient load balancing approach, the Hybrid Approach, for the cloud computing network to reduce the load on the executor. The hybrid approach achieves better load balancing and performance than the FCFS, Round Robin, and Priority algorithms. The job load is expected to be reduced by implementing the Job Mapping Policy along with the FCFS and Priority scheduling concepts, because the job mapping policy restricts the number of jobs mapped with a single job, while priority scheduling speeds up execution.
In the future, a mechanism could be applied so that one id can make only a fixed number of transactions; the admin would then learn through the monitoring system when unauthorized access has been made, and it would be easier to take action against such happenings. In addition, the priority rule could be refined: allowing a job of lower priority but shorter execution time to proceed first could make the system faster.
REFERENCES
[1] S. Selvarani and G. Sudha Sadhasivam, "Improved cost-based algorithm for task scheduling in cloud computing," IEEE, 2010.
[2] Saeed Parsa and Reza Entezari-Maleki, "RASA: A New Task Scheduling Algorithm in Grid Environment," World Applied Sciences Journal 7 (Special Issue of Computer & IT): 152-160, 2009.
[3] Ali M. Alakeel, "A Guide to Dynamic Load Balancing in Distributed Computer Systems," IJCSNS International Journal of Computer Science and Network Security, Vol. 10, No. 6, June 2010.
[4] A. Khiyaita, M. Zbakh, H. El Bakkali and D. El Kettani, "Load Balancing Cloud Computing: State of Art," IEEE, 2012.
[5] Lijun Mei, W.K. Chan, and T.H. Tse, "A Tale of Clouds: Paradigm Comparisons and Some Thoughts on Research Issues," in Proceedings of the 2008 IEEE Asia-Pacific Services Computing Conference (APSCC 2008), IEEE Computer Society Press, Los Alamitos, CA, 2008.
[6] Byron Ludwig and Serena Coetzee, "A Comparison of Platform as a Service (PaaS) Clouds with a Detailed Reference to Security and Geoprocessing Services."
[7] Peter Mell and Timothy Grance, "The NIST Definition of Cloud Computing," NIST Special Publication 800-145.
[8] http://www.thebeckon.com/pros-and-cons-of-cloud-computing/
[9] Zenon Chaczko, Venkatesh Mahadevan, Shahrzad Aslanzadeh and Christopher Mcdermid, "Availability and Load Balancing in Cloud Computing," 2011 International Conference on Computer and Software Modeling, IPCSIT Vol. 14, IACSIT Press, Singapore, 2011.
[10] A. Verma, P. Ahuja, and A. Neogi, "pMapper: Power and Migration Cost Aware Application Placement in Virtualized Systems," in Proceedings of the 9th ACM/IFIP/USENIX International Conference on Middleware, Springer-Verlag, New York, NY, USA, 2008.
[11] Christoph Kleineweber, Axel Keller, Oliver Niehörster, and André Brinkmann, "Rule-Based Mapping of Virtual Machines in Clouds," in Proceedings of the 19th International Euromicro Conference on Parallel, Distributed and Network-Based Processing, Paderborn Center for Parallel Computing, Universität Paderborn, Germany, 2011.
[12] C. Clark, K. Fraser, S. Hand, J. G. Hansen, E. Jul, C. Limpach, I. Pratt, and A. Warfield, "Live migration of virtual machines," in Proceedings of the 2nd Symposium on Networked Systems Design & Implementation (NSDI), USENIX Association, Berkeley, CA, USA, 2005.
[13] Jinhua Hu, Jianhua Gu, Guofei Sun, and Tianhai Zhao, "A Scheduling Strategy on Load Balancing of Virtual Machine Resources in Cloud Computing Environment," in Proceedings of the 3rd International Symposium on Parallel Architectures, Algorithms and Programming.
[14] R. N. Calheiros, R. Buyya, and C. A. F. De Rose, "A heuristic for mapping virtual machines and links in emulation testbeds," in Proceedings of the 9th International Conference on Parallel Processing (ICPP), IEEE Computer Society, Washington, DC, USA, 2009, pp. 518-525.
[15] Junjie Ni, Yuanqiang Huang, Zhongzhi Luan, Juncheng Zhang and Depei Qian, "Virtual Machine Mapping Policy Based on Load Balancing in Private Cloud Environment," 2011 International Conference on Cloud and Service Computing, IEEE, 2011.
[16] Martin Randles, David Lamb, and A. Taleb-Bendiab, "A Comparative Study into Distributed Load Balancing Algorithms for Cloud Computing," 2010 IEEE 24th International Conference on Advanced Information Networking and Applications Workshops.