
A Profit Maximization Scheme with

Guaranteed
Quality of Service in Cloud Computing
ABSTRACT

As an effective and efficient way to provide computing resources and services to
customers on demand, cloud computing has become increasingly popular. From
cloud service providers’ perspective, profit is one of the most important
considerations, and it is mainly determined by the configuration of a cloud service
platform under given market demand. However, a single long-term renting scheme is
usually adopted to configure a cloud platform, which cannot guarantee the service
quality but leads to serious resource waste. In this paper, a double resource renting
scheme is designed firstly in which short-term renting and long-term renting are
combined aiming at the existing issues. This double renting scheme can effectively
guarantee the quality of service of all requests and reduce the resource waste greatly.
Secondly, a service system is considered as an M/M/m+D queuing model and the
performance indicators that affect the profit of our double renting scheme are
analyzed, e.g., the average charge, the ratio of requests that need temporary servers,
and so forth. Thirdly, a profit maximization problem is formulated for the double
renting scheme and the optimized configuration of a cloud platform is obtained by
solving the profit maximization problem. Finally, a series of calculations are
conducted to compare the profit of our proposed scheme with that of the single renting
scheme. The results show that our scheme can not only guarantee the service quality
of all requests, but also obtain more profit than the latter.
LIST OF CONTENTS

List of Figures

List of Tables
1. Introduction

1.1 Purpose

1.2 Scope

1.3 Motivation

2. Fundamental Concepts on (Domain)

2.1 Introduction

2.2 Characteristics and Services Models

2.3 Services Models

3. System Analysis

3.1 Existing System

3.1.1 Disadvantages

3.2 Proposed System

3.2.1 Advantages

3.3 Modules Description

3.4 Feasibility Study

3.4.1 Economic Feasibility

3.4.2 Technical Feasibility

3.4.3 Social Feasibility

4. System Requirements Specification

4.1 Introduction

4.2 Purpose

4.3 Functional Requirements

4.4 Non-Functional Requirements

4.5 Software Requirements

4.6 Hardware Requirements

5. System Design

5.1 Data Flow Diagrams

5.2 UML Diagrams (Component, Use Case, Activity, Sequence)

5.3 Database Tables

6. Implementation

6.1 Introduction

6.2 Technology Description

7. System Testing

7.1 Unit Testing

7.2 Integration Testing

7.3 Acceptance Testing

7.4 Test Cases

8. Conclusion and Future Enhancements

9. References

Appendix:

1. Sample code
2. Screenshots
1 Introduction

1.1 Purpose
As an effective and efficient way to consolidate computing resources and computing services,
cloud computing has become increasingly popular. Cloud computing centralizes the
management of resources and services, and delivers hosted services over the Internet. The
hardware, software, databases, information, and all resources are concentrated and provided
to consumers on demand.

1.2 Scope

In this paper, we only consider the profit maximization problem in a homogeneous cloud
environment, because the analysis of a heterogeneous environment is much more complicated
than that of a homogeneous environment. However, we will extend our study to a
heterogeneous environment in the future.

1.3 Motivation
To configure a cloud service platform, a service provider usually adopts a single renting
scheme; that is, all servers in the service system are long-term rented. Because of the limited
number of servers, some incoming service requests cannot be processed immediately, so they
are first placed in a queue until they can be handled by an available server. However, the
waiting time of a service request cannot be too long. In order to satisfy quality-of-service
requirements, the waiting time of each incoming service request should be limited to a certain
range, which is determined by a service-level agreement (SLA). If the quality of service is
guaranteed, the service is fully charged; otherwise, the service provider serves the request for
free as a penalty for low quality. To obtain higher revenue, a service provider should rent more
servers from the infrastructure providers or scale up the server execution speed to ensure that
more service requests are processed with high service quality.
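A small, hypothetical Java sketch of the charging rule just described (full charge when the waiting time stays within the SLA deadline, free service otherwise); the rate, speed, and deadline values are illustrative assumptions, not figures from this work.

// Illustrative sketch only: a simplified SLA-based charging rule inferred from the
// description above. The rate, speed, and deadline values are hypothetical.
public final class SlaCharge {

    // Returns the revenue earned from one request. A request of size "workload",
    // served at speed "speed", is fully charged at "ratePerUnit" only if its waiting
    // time stayed within the SLA deadline; otherwise it is served for free as a penalty.
    static double chargeFor(double workload, double speed,
                            double waitingTime, double deadline,
                            double ratePerUnit) {
        if (waitingTime <= deadline) {
            return ratePerUnit * workload / speed;   // charge based on service time
        }
        return 0.0;                                  // QoS violated: no revenue
    }

    public static void main(String[] args) {
        System.out.println(chargeFor(10.0, 2.0, 1.5, 3.0, 0.5)); // deadline met: 2.5
        System.out.println(chargeFor(10.0, 2.0, 4.0, 3.0, 0.5)); // deadline missed: 0.0
    }
}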
2. Fundamental Concepts on (Domain)
2.1 Introduction

Cloud computing is the use of computing resources (hardware and software) that are
delivered as a service over a network (typically the Internet). The name comes from the
common use of a cloud-shaped symbol as an abstraction for the complex infrastructure it
contains in system diagrams. Cloud computing entrusts remote services with a user's data,
software and computation. Cloud computing consists of hardware and software resources
made available on the Internet as managed third-party services. These services typically
provide access to advanced software applications and high-end networks of server computers.

Structure of cloud computing

How Cloud Computing Works?

The goal of cloud computing is to apply traditional supercomputing, or high-performance


computing power, normally used by military and research facilities, to perform tens of
trillions of computations per second, in consumer-oriented applications such as financial
portfolios, to deliver personalized information, to provide data storage or to power large,
immersive computer games.
Cloud computing uses networks of large groups of servers, typically running low-cost
consumer PC technology, with specialized connections to spread data-processing chores
across them. This shared IT infrastructure contains large pools of systems that are linked
together. Often, virtualization techniques are used to maximize the power of cloud
computing.

2.2 Characteristics and Services Models:

The salient characteristics of cloud computing, based on the definitions provided
by the National Institute of Standards and Technology (NIST), are outlined below:

 On-demand self-service: A consumer can unilaterally provision computing


capabilities, such as server time and network storage, as needed automatically without
requiring human interaction with each service’s provider.
 Broad network access: Capabilities are available over the network and accessed
through standard mechanisms that promote use by heterogeneous thin or thick client
platforms (e.g., mobile phones, laptops, and PDAs).
 Resource pooling: The provider’s computing resources are pooled to serve multiple
consumers using a multi-tenant model, with different physical and virtual resources
dynamically assigned and reassigned according to consumer demand. There is a sense
of location-independence in that the customer generally has no control or knowledge
over the exact location of the provided resources but may be able to specify location
at a higher level of abstraction (e.g., country, state, or data center). Examples of
resources include storage, processing, memory, network bandwidth, and virtual
machines.
 Rapid elasticity: Capabilities can be rapidly and elastically provisioned, in some
cases automatically, to quickly scale out and rapidly released to quickly scale in. To
the consumer, the capabilities available for provisioning often appear to be unlimited
and can be purchased in any quantity at any time.
 Measured service: Cloud systems automatically control and optimize resource use by
leveraging a metering capability at some level of abstraction appropriate to the type of
service (e.g., storage, processing, bandwidth, and active user accounts). Resource
usage can be managed, controlled, and reported providing transparency for both the
provider and consumer of the utilized service.
Characteristics of cloud computing

2.3 Services Models:

Cloud computing comprises three different service models, namely Infrastructure-as-
a-Service (IaaS), Platform-as-a-Service (PaaS), and Software-as-a-Service (SaaS). The three
service models, or layers, are complemented by an end-user layer that encapsulates the end-user
perspective on cloud services. The model is shown in the figure below. If a cloud user accesses
services on the infrastructure layer, for instance, she can run her own applications on the
resources of a cloud infrastructure and remains responsible for the support, maintenance, and
security of these applications herself. If she accesses a service on the application layer, these
tasks are normally taken care of by the cloud service provider.
Structure of service models

Benefits of cloud computing:

1. Achieve economies of scale – increase volume output or productivity with fewer


people. Your cost per unit, project or product plummets.
2. Reduce spending on technology infrastructure. Maintain easy access to your
information with minimal upfront spending. Pay as you go (weekly, quarterly or
yearly), based on demand.
3. Globalize your workforce on the cheap. People worldwide can access the cloud,
provided they have an Internet connection.
4. Streamline processes. Get more work done in less time with fewer people.
5. Reduce capital costs. There’s no need to spend big money on hardware, software or
licensing fees.
6. Improve accessibility. You have access anytime, anywhere, making your life so
much easier!
7. Monitor projects more effectively. Stay within budget and ahead of completion
cycle times.
8. Less personnel training is needed. It takes fewer people to do more work on a cloud,
with a minimal learning curve on hardware and software issues.
9. Minimize licensing new software. Stretch and grow without the need to buy
expensive software licenses or programs.
10. Improve flexibility. You can change direction without serious “people” or
“financial” issues at stake.

Advantages:

1. Price: Pay for only the resources used.


2. Security: Cloud instances are isolated in the network from other instances for
improved security.
3. Performance: Instances can be added instantly for improved performance. Clients
have access to the total resources of the Cloud’s core hardware.
4. Scalability: Auto-deploy cloud instances when needed.
5. Uptime: Uses multiple servers for maximum redundancies. In case of server failure,
instances can be automatically created on another server.
6. Control: Able to login from any location. Server snapshot and a software library lets
you deploy custom instances.
7. Traffic: Deals with spike in traffic with quick deployment of additional instances to
handle the load.

3. SYSTEM ANALYSIS
The Systems Development Life Cycle (SDLC), or Software Development Life
Cycle in systems engineering, information systems and software engineering, is the process
of creating or altering systems, and the models and methodologies that people use to develop
these systems.

In software engineering, the SDLC concept underpins many kinds of software development
methodologies. These methodologies form the framework for planning and controlling the
creation of an information system: the software development process.

SOFTWARE MODEL OR ARCHITECTURE ANALYSIS:

Structured project management techniques (such as an SDLC) enhance management's control
over projects by dividing complex tasks into manageable sections. A software life cycle model
is either a descriptive or prescriptive characterization of how software is or should be
developed. However, none of the SDLC models discuss key issues such as change management,
incident management, and release management within the SDLC process; these are instead
addressed in overall project management. In the proposed hypothetical model, the concept of
user-developer interaction in the conventional SDLC model has been converted into a
three-dimensional model comprising the user, the owner, and the developer. The "one size fits
all" approach to applying SDLC methodologies is no longer appropriate. We have attempted to
address the above-mentioned defects by using a new hypothetical model for the SDLC,
described elsewhere. The drawback of addressing these management processes under overall
project management is that key technical issues pertaining to the software development process
are missed; that is, these issues are discussed in project management at the surface level but not
at the ground level.
WHAT IS SDLC?

A software cycle deals with various parts and phases from planning to testing
and deploying software. All these activities are carried out in different ways, as per the needs.
Each way is known as a Software Development Lifecycle Model (SDLC). A software life
cycle model is either a descriptive or prescriptive characterization of how software is or
should be developed. A descriptive model describes the history of how a particular software
system was developed. Descriptive models may be used as the basis for understanding and
improving software development processes or for building empirically grounded prescriptive
models.
SDLC models:

* The Linear model (Waterfall) - Separate and distinct phases of specification and development; all activities proceed in a linear fashion, and the next phase starts only when the previous one is complete.
* Evolutionary development - Specification and development are interleaved (Spiral, incremental, prototype-based, Rapid Application Development).
  - Incremental Model (Waterfall in iterations)
  - RAD (Rapid Application Development) - the focus is on developing a quality product in less time.
  - Spiral Model - we start from a smaller module and keep building on it like a spiral; it is also called component-based development.
* Formal systems development - A mathematical system model is formally transformed to an implementation.
* Agile methods - Inducing flexibility into development.
* Reuse-based development - The system is assembled from existing components.
The General Model
Software life cycle models describe phases of the software cycle and the order in which those
phases are executed. There are tons of models, and many companies adopt their own, but all
have very similar patterns. Each phase produces deliverables required by the next phase in
the life cycle. Requirements are translated into design. Code is produced during
implementation that is driven by the design. Testing verifies the deliverable of the
implementation phase against requirements.
SDLC Methodology:

Spiral Model

The spiral model is similar to the incremental model, with more emphasis placed on
risk analysis. The spiral model has four phases: Planning, Risk Analysis, Engineering and
Evaluation. A software project repeatedly passes through these phases in iterations (called
spirals in this model). In the baseline spiral, starting in the planning phase, requirements are
gathered and risk is assessed. Each subsequent spiral builds on the baseline spiral.
Requirements are gathered during the planning phase. In the risk analysis phase, a process is
undertaken to identify risks and alternate solutions. A prototype is produced at the end of the
risk analysis phase. Software is produced in the engineering phase, along with testing at
the end of the phase. The evaluation phase allows the customer to evaluate the output of the
project to date before the project continues to the next spiral. In the spiral model, the angular
component represents progress, and the radius of the spiral represents cost.
Spiral Life Cycle Model

This document plays a vital role in the development life cycle (SDLC) as it describes
the complete requirements of the system. It is meant for use by developers and will be the basis
during the testing phase. Any changes made to the requirements in the future will have to go
through a formal change approval process.

The SPIRAL MODEL was defined by Barry Boehm in his 1988 article, "A Spiral Model of
Software Development and Enhancement." This model was not the first to discuss
iterative development, but it was the first to explain why the iteration matters.

As originally envisioned, the iterations were typically 6 months to 2 years long. Each
phase starts with a design goal and ends with a client reviewing the progress thus far.
Analysis and engineering efforts are applied at each phase of the project, with an eye toward
the end goal of the project.
The steps for Spiral Model can be generalized as follows:

 The new system requirements are defined in as much detail as possible. This
usually involves interviewing a number of users representing all the external or
internal users and other aspects of the existing system.

 A preliminary design is created for the new system.

 A first prototype of the new system is constructed from the preliminary design.
This is usually a scaled-down system, and represents an approximation of the
characteristics of the final product.

 A second prototype is evolved by a fourfold procedure:

1. Evaluating the first prototype in terms of its strengths, weaknesses, and risks.

2. Defining the requirements of the second prototype.

3. Planning and designing the second prototype.

4. Constructing and testing the second prototype.

 At the customer's option, the entire project can be aborted if the risk is deemed too
great. Risk factors might involve development cost overruns, operating-cost
miscalculations, or any other factor that could, in the customer's judgment, result
in a less-than-satisfactory final product.

 The existing prototype is evaluated in the same manner as was the previous
prototype, and if necessary, another prototype is developed from it according to
the fourfold procedure outlined above.

 The preceding steps are iterated until the customer is satisfied that the refined
prototype represents the final product desired.

 The final system is constructed, based on the refined prototype.

 The final system is thoroughly evaluated and tested. Routine maintenance is carried
out on a continuing basis to prevent large-scale failures and to minimize downtime.
Fig -Spiral Model

Advantages

 High amount of risk analysis


 Good for large and mission-critical projects.
 Software is produced early in the software life cycle.

3.1 Existing System

Many existing studies consider only the power consumption cost. As a major difference
between their models and ours, the resource rental cost is also considered in this paper, since it
is a major factor that affects the profit of service providers. The traditional single resource
renting scheme cannot guarantee the quality of all requests and wastes a great amount of
resources due to the uncertainty of system workload. To overcome this weakness, we propose
a double renting scheme, described below, which can not only guarantee the quality of service
completely but also greatly reduce resource waste.

3.1.1 Disadvantages

1. Cannot guarantee maximum profit.
2. Resource wastage.
3. Uncertainty of system workload.

3.2 Proposed System


In this section, we first propose the Double-Quality-Guaranteed (DQG) resource renting
scheme, which combines long-term renting with short-term renting. The main computing
capacity is provided by the long-term rented servers due to their low price, while the
short-term rented servers provide the extra capacity during peak periods.
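As an illustration of how such a combined scheme could behave, the following sketch shows a simplified dispatch rule in Java: a request is served by a long-term rented server whenever its waiting time stays within the deadline D, and a temporary short-term server is rented for it otherwise. This is a hypothetical sketch of the idea under these assumptions, not the paper's actual DQG algorithm.

import java.util.PriorityQueue;

// Hypothetical sketch of a DQG-style dispatch rule. Requests arriving in time order are
// queued for the long-term rented servers; any request whose queueing delay would exceed
// the deadline D is instead assigned a temporary short-term rented server.
public final class DqgDispatcher {

    private final PriorityQueue<Double> longTermFreeTimes; // next-free time of each long-term server
    private final double deadline;                          // maximum tolerable waiting time D
    private int shortTermRentals = 0;

    DqgDispatcher(int longTermServers, double deadline) {
        this.longTermFreeTimes = new PriorityQueue<>();
        for (int i = 0; i < longTermServers; i++) longTermFreeTimes.add(0.0);
        this.deadline = deadline;
    }

    // Dispatches one request arriving at time "arrival" that needs "service" time units.
    void dispatch(double arrival, double service) {
        double earliestFree = longTermFreeTimes.peek();
        double wait = Math.max(0.0, earliestFree - arrival);
        if (wait <= deadline) {
            // Served by a long-term server; the waiting time stays within the SLA.
            longTermFreeTimes.poll();
            longTermFreeTimes.add(Math.max(arrival, earliestFree) + service);
        } else {
            // Would miss the deadline: rent a temporary (short-term) server for this request.
            shortTermRentals++;
        }
    }

    int temporaryServersUsed() { return shortTermRentals; }
}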

3.2.1 Advantages

In the proposed system, the Double-Quality-Guaranteed (DQG) renting scheme can achieve
more profit than the compared Single-Quality-Unguaranteed (SQU) renting scheme while
guaranteeing the service quality completely.

3.3 Modules Description

1. Cloud Computing
2. Queuing Model
3. Business Service Provider Module
4. Cloud Customer Module
5. Infrastructure Service Provider Module

Cloud Computing
Cloud computing describes a type of outsourcing of computer services, similar to the way in
which the supply of electricity is outsourced. Users can simply use it; they do not need to
worry about where the electricity comes from, how it is made, or how it is transported. Every
month, they pay for what they consumed. The idea behind cloud computing is similar: the user
can simply use storage, computing power, or specially crafted development environments,
without having to worry about how these work internally. Cloud computing is usually
Internet-based computing. The cloud is a metaphor for the Internet, based on how the Internet
is depicted in computer network diagrams, which means it is an abstraction hiding the complex
infrastructure of the Internet. It is a style of computing in which IT-related capabilities are
provided "as a service", allowing users to access technology-enabled services from the Internet
("in the cloud") without knowledge of, or control over, the technologies behind these services.

Queuing model:

We consider the cloud service platform as a multiserver system with a service request queue.
The cloud provides resources for jobs in the form of virtual machines (VMs). The users submit
their jobs to the cloud, in which a job queuing system such as SGE, PBS, or Condor is used.
All jobs are scheduled by the job scheduler and assigned to different VMs in a centralized way.
Hence, we can consider it as a service request queue. For example, Condor is a specialized
workload management system for compute-intensive jobs, and it provides a job queueing
mechanism, scheduling policy, priority scheme, resource monitoring, and resource
management. Users submit their jobs to Condor, and Condor places them into a queue and
chooses when and where to run them based upon a policy. An M/M/m+D queueing model is
built for our multiserver system with varying system size. Then, an optimal configuration
problem of profit maximization is formulated in which many factors are taken into
consideration, such as the market demand, the workload of requests, the service-level
agreement, the rental cost of servers, the cost of energy consumption, and so forth. The optimal
solutions are solved for two different situations: the ideal optimal solutions and the actual
optimal solutions.
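For intuition about the performance indicators involved, the sketch below computes classic M/M/m measures (the Erlang C waiting probability and the mean queueing delay). It is only a simplified stand-in, since the M/M/m+D model used here additionally accounts for the deadline D, and the arrival rate, service rate, and server count are assumed values.

// Simplified numerical sketch: standard M/M/m quantities as a stand-in for the fuller
// M/M/m+D analysis. All parameter values below are assumptions for illustration.
public final class MmmQueue {

    // Erlang C: probability that an arriving request has to wait (M/M/m queue).
    static double waitProbability(double lambda, double mu, int m) {
        double rho = lambda / (m * mu);            // server utilization, must be < 1
        double a = lambda / mu;                    // offered load in Erlangs
        double sum = 0.0, term = 1.0;
        for (int k = 0; k < m; k++) {
            if (k > 0) term *= a / k;              // term = a^k / k!
            sum += term;
        }
        double last = term * a / m;                // a^m / m!
        return (last / (1 - rho)) / (sum + last / (1 - rho));
    }

    // Mean waiting time in the queue for M/M/m.
    static double meanWait(double lambda, double mu, int m) {
        return waitProbability(lambda, mu, m) / (m * mu - lambda);
    }

    public static void main(String[] args) {
        double lambda = 8.0;  // request arrival rate (assumed)
        double mu = 1.0;      // service rate per server (assumed)
        int m = 10;           // number of long-term rented servers (assumed)
        System.out.printf("P(wait) = %.4f%n", waitProbability(lambda, mu, m));
        System.out.printf("E[W]    = %.4f%n", meanWait(lambda, mu, m));
    }
}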

Business Service Providers Module

Service providers pay infrastructure providers for renting their physical resources, and charge
customers for processing their service requests, which generates cost and revenue,
respectively. The profit is generated from the gap between the revenue and the cost. In this
module, the service providers are considered as cloud brokers because they play an important
role between cloud customers and infrastructure providers and can establish an indirect
connection between cloud customers and infrastructure providers.

Infrastructure Service Provider Module

In the three-tier structure, an infrastructure provider provides the basic hardware and software
facilities. A service provider rents resources from infrastructure providers and prepares a set
of services in the form of virtual machines (VMs). Infrastructure providers offer two kinds
of resource renting schemes, i.e., long-term renting and short-term renting. In general, the
rental price of long-term renting is much cheaper than that of short-term renting.
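The relationship between revenue, rental cost, and profit described above can be made concrete with a back-of-the-envelope calculation; all prices and quantities in this sketch are invented placeholders, not figures from the paper.

// Illustrative only: profit = revenue from customers minus rental cost (long-term plus
// short-term) minus energy cost. Numbers are placeholders.
public final class ProfitSketch {

    static double profit(double revenue,
                         int longTermServers, double longTermPricePerHour,
                         double shortTermServerHours, double shortTermPricePerHour,
                         double energyCost) {
        double rentalCost = longTermServers * longTermPricePerHour
                          + shortTermServerHours * shortTermPricePerHour;
        return revenue - rentalCost - energyCost;
    }

    public static void main(String[] args) {
        // Long-term renting is assumed cheaper per server-hour than short-term renting.
        double p = profit(500.0, 10, 1.0, 12.5, 3.0, 40.0);
        System.out.println("Profit per hour: " + p);   // 500 - (10 + 37.5) - 40 = 412.5
    }
}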

Cloud Customers

A customer submits a service request to a service provider, which delivers services on
demand. The customer receives the desired result from the service provider under a certain
service-level agreement, and pays for the service based on the amount of service and the
service quality.

3.4 FEASIBILITY STUDY

The feasibility of the project is analyzed in this phase, and a business proposal is put forth with
a very general plan for the project and some cost estimates. During system analysis, the
feasibility study of the proposed system is to be carried out. This is to ensure that the
proposed system is not a burden to the company. For feasibility analysis, some
understanding of the major requirements for the system is essential.

Three key considerations involved in the feasibility analysis are

 ECONOMICAL FEASIBILITY
 TECHNICAL FEASIBILITY
 SOCIAL FEASIBILITY

ECONOMICAL FEASIBILITY
This study is carried out to check the economic impact that the system will have on the
organization. The amount of funds that the company can pour into the research and
development of the system is limited. The expenditures must be justified. Thus the developed
system is well within the budget, and this was achieved because most of the technologies
used are freely available. Only the customized products had to be purchased.

TECHNICAL FEASIBILITY

This study is carried out to check the technical feasibility, that is, the technical requirements
of the system. Any system developed must not place a high demand on the available technical
resources; otherwise, high demands would be placed on the client. The developed system must
have modest requirements, as only minimal or no changes are required for implementing this
system.

SOCIAL FEASIBILITY

This aspect of the study is to check the level of acceptance of the system by the user. This
includes the process of training the user to use the system efficiently. The user must not feel
threatened by the system; instead, they must accept it as a necessity. The level of acceptance
by the users solely depends on the methods that are employed to educate the user about the
system and to make the user familiar with it. The user's level of confidence must be raised so
that they are also able to offer some constructive criticism, which is welcomed, as they are the
final user of the system.

4 System Requirements Specification

4.1 Introduction
A Software Requirements Specification (SRS) – a requirements specification
for a software system – is a complete description of the behavior of a system to be developed.
It includes a set of use cases that describe all the interactions the users will have with the
software. In addition to use cases, the SRS also contains non-functional requirements. Non-
functional requirements are requirements which impose constraints on the design or
implementation (such as performance engineering requirements, quality standards, or design
constraints).
System requirements specification: a structured collection of information that embodies
the requirements of a system. A business analyst, sometimes titled system analyst, is
responsible for analyzing the business needs of clients and stakeholders to help identify
business problems and propose solutions. Within the systems development life cycle domain,
the business analyst typically performs a liaison function between the business side of an
enterprise and the information technology department or external service providers. Projects
are subject to three sorts of requirements:
 Business requirements describe in business terms what must be delivered or
accomplished to provide value.
 Product requirements describe properties of a system or product (which could be one of
several ways to accomplish a set of business requirements).
 Process requirements describe activities performed by the developing organization.
For instance, process requirements could specify specific methodologies that must be
followed, and constraints that the organization must obey.
Product and process requirements are closely linked. Process requirements often specify the
activities that will be performed to satisfy a product requirement. For example, a maximum
development cost requirement (a process requirement) may be imposed to help achieve a
maximum sales price requirement (a product requirement); a requirement that the product be
maintainable (a product requirement) is often addressed by imposing requirements to follow
particular development styles.

4.2 PURPOSE

In systems engineering, a requirement can be a description of what a system must do,


referred to as a Functional Requirement. This type of requirement specifies something that the
delivered system must be able to do. Another type of requirement specifies something about the
system itself, and how well it performs its functions. Such requirements are often called Non-
functional requirements, or 'performance requirements' or 'quality of service requirements.'
Examples of such requirements include usability, availability, reliability, supportability, testability
and maintainability.

A collection of requirements define the characteristics or features of the desired system. A 'good' list
of requirements as far as possible avoids saying how the system should implement the
requirements, leaving such decisions to the system designer. Specifying how the system should be
implemented is called "implementation bias" or "solution engineering". However, implementation
constraints on the solution may validly be expressed by the future owner, for example for required
interfaces to external systems; for interoperability with other systems; and for commonality (e.g. of
user interfaces) with other owned products.

In software engineering, the same meanings of requirements apply, except that the focus of interest
is the software itself.

4.3 FUNCTIONAL REQUIREMENTS

1) Customer registration and login
2) Upload of files and service requests into the cloud
3) Viewing policies and submitting a policy request to the broker
4) Acceptance of user requests by the broker and forwarding to the infrastructure provider
5) Provisioning of resources by the infrastructure service provider

4.4 NON FUNCTIONAL REQUIREMENTS

The major non-functional Requirements of the system are as follows

Usability
The system is designed as a completely automated process; hence there is little or no user
intervention.

Reliability
The system is highly reliable because of the qualities inherited from the chosen platform,
Java. Code built using Java is more reliable.

Performance
The system is developed in a high-level language and, using advanced front-end and
back-end technologies, it responds to the end user on the client system within a very short
time.

Supportability
The system is designed to be cross-platform. It is supported on a wide range of hardware and
on any software platform that has a JVM built in.

Implementation
The system is implemented in a web environment using the Struts framework. Apache Tomcat
is used as the web server and Windows XP Professional is used as the platform.

Interface
The user interface is based on the HTML tags provided by Struts.

4.5 Software Requirements:
Language : JDK 1.7
Front end : JSP, HTML
Back end : Oracle 10g
Operating System : Windows 7
Server : Apache Tomcat 7

4.6 Hardware Requirements:
Processor : Pentium IV
Hard Disk : 80 GB
RAM : 2 GB

5 System Design
The purpose of the design phase is to plan a solution to the problem specified by the
requirements document. This phase is the first step in moving from the problem domain to the
solution domain. In other words, starting with what is needed, design takes us toward how to
satisfy the needs. The design of a system is perhaps the most critical factor affecting the
quality of the software; it has a major impact on the later phases, particularly testing and
maintenance. The output of this phase is the design document. This document is similar to a
blueprint for the solution and is used later during implementation.

Data Flow Diagram / Use Case Diagram / Flow Diagram

The DFD is also called a bubble chart. It is a simple graphical formalism that can be used to
represent a system in terms of the input data to the system, the various processing carried out
on these data, and the output data generated by the system.

Data Flow Diagram:

(Cloud user): The cloud user logs in; if the check fails, the user is treated as unauthorized and the process ends. On successful login, the user can upload files into the cloud, view request status, view policies, choose a policy, and send the policy request to the broker, after which the process ends.

(Broker): The broker logs in; unauthorized logins are rejected. On successful login, the broker can view user resource requests, view infrastructure services, accept user requests, view graph modulations, and check server capacities, after which the process ends.

(Infrastructure service provider): The infrastructure service provider (server) logs in; unauthorized logins are rejected. On successful login, it provides resources, accepts broker requests, and accepts user requests, after which the process ends.
Component Diagram:

(Cloud user): The user enters a username and password to log in, uploads files into the cloud, chooses policies and sends them to the broker, and can view status and policy terms.

(Broker): The broker enters a username and password to log in, views user requests, views server resources, sends user requests to the server, and views graph modulations.

(Infrastructure service provider): The server administrator enters a name and password to log in, views user requests, provides resources, maintains users, and logs out.
Use Case Diagram:

(Cloud user): The user logs in, uploads files into the cloud, views policy terms, chooses policy terms, sends the policy terms to the broker, and views status.

(Broker): The broker logs in, views user requests, views server capacities, accepts user requests, sends user policies to the server, and views graph modulations.

(Infrastructure service provider): The server logs in, provides resources, views user requests, accepts broker requests, and maintains users.
Activity Diagram:

(Cloud user): The user logs in, uploads files into the cloud, views policy terms, selects a policy and sends the request, views status, and ends the process.

(Broker): The broker logs in, views user requests, views user policy terms, accepts user requests, and ends the process.

(Infrastructure service provider): The server logs in, provides resources, views broker requests, accepts user requests, logs out, and ends the process.

Sequence Diagram:

(Cloud user): The user enters a username and password to log in, uploads files into the cloud, views status, views policy terms, and selects policies to send to the broker.

(Broker): The broker enters a user ID and password to log in, views user requests, accepts user requests, and views graph modulations.

(Infrastructure service provider): The server administrator logs in, provides resources, accepts broker requests, maintains users, and logs out.

Database Tables
6 Implementation
6.1 Introduction
Implementation is the stage where the theoretical design is turned into a working system. The
most crucial stage is achieving a successful new system and giving the users confidence that
the new system will work efficiently and effectively.

The system can be implemented only after thorough testing is done and it is found to work
according to the specification. Implementation involves careful planning, investigation of the
current system and its constraints on implementation, design of methods to achieve the
changeover, and an evaluation of changeover methods apart from planning. Two major tasks
of preparing the implementation are education and training of the users and testing of the
system.

The more complex the system being implemented, the more involved will be the systems
analysis and design effort required just for implementation. The implementation phase
comprises several activities. The required hardware and software acquisition is carried out.
The system may require some software to be developed; for this, programs are written and
tested. The user then changes over to the new, fully tested system and the old system is
discontinued.

Implementation is the process of having systems personnel check out and put new equipment
into use, train users, install the new application, and construct any files of data needed for it.

Depending on the size of the organization that will be involved in using the
application and the risk associated with its use, system developers may choose to test the
operation in only one area of the firm, say in one department or with only one or two persons.
Sometimes they will run the old and new systems together to compare the results. In still
other situations, developers will stop using the old system one day and begin using the new
one the next. As we will see, each implementation strategy has its merits, depending on the
business situation in which it is considered.
6.2 Technology Description
Java Technology
Java technology is both a programming language and a platform.

The Java Programming Language


The Java programming language is a high-level language that can be characterized by all of
the following buzzwords:

 Simple
 Architecture neutral
 Object oriented
 Portable
 Distributed
 High performance
 Interpreted
 Multithreaded
 Robust
 Dynamic
 Secure

With most programming languages, you either compile or interpret a program so that you can
run it on your computer. The Java programming language is unusual in that a program is both
compiled and interpreted. With the compiler, first you translate a program into an
intermediate language called Java byte codes —the platform-independent codes interpreted
by the interpreter on the Java platform. The interpreter parses and runs each Java byte code
instruction on the computer. Compilation happens just once; interpretation occurs each time
the program is executed. The following figure illustrates how this works.
You can think of Java byte codes as the machine code instructions for the Java Virtual
Machine (Java VM). Every Java interpreter, whether it’s a development tool or a Web
browser that can run applets, is an implementation of the Java VM. Java byte codes help
make “write once, run anywhere” possible. You can compile your program into byte codes on
any platform that has a Java compiler. The byte codes can then be run on any implementation
of the Java VM. That means that as long as a computer has a Java VM, the same program
written in the Java programming language can run on Windows 2000, a Solaris workstation,
or on an iMac.
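A minimal example of this compile-once, run-anywhere flow (the file name and output text are chosen only for illustration):

// HelloWorld.java -- compiling with "javac HelloWorld.java" produces platform-independent
// byte codes (HelloWorld.class); "java HelloWorld" then runs those byte codes on any Java VM.
public class HelloWorld {
    public static void main(String[] args) {
        System.out.println("Hello from the Java platform");
    }
}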

The Java Platform


A platform is the hardware or software environment in which a program runs. We’ve already
mentioned some of the most popular platforms like Windows 2000, Linux, Solaris, and
MacOS. Most platforms can be described as a combination of the operating system and
hardware. The Java platform differs from most other platforms in that it’s a software-only
platform that runs on top of other hardware-based platforms.

The Java platform has two components:


 The Java Virtual Machine (Java VM)
 The Java Application Programming Interface (Java API)
You’ve already been introduced to the Java VM. It’s the base for the Java platform and is
ported onto various hardware-based platforms.

The Java API is a large collection of ready-made software components that provide many
useful capabilities, such as graphical user interface (GUI) widgets. The Java API is grouped
into libraries of related classes and interfaces; these libraries are known as packages. The
next section, "What Can Java Technology Do?", highlights what functionality some of the
packages in the Java API provide.
The following figure depicts a program that’s running on the Java platform. As the figure
shows, the Java API and the virtual machine insulate the program from the hardware.

Native code is code that, after you compile it, runs only on a specific hardware platform. As a
platform-independent environment, the Java platform can be a bit slower than native code.
However, smart compilers, well-tuned interpreters, and just-in-time byte code compilers can
bring performance close to that of native code without threatening portability.

What Can Java Technology Do?


The most common types of programs written in the Java programming language are applets
and applications. If you’ve surfed the Web, you’re probably already familiar with applets. An
applet is a program that adheres to certain conventions that allow it to run within a Java-
enabled browser.

However, the Java programming language is not just for writing cute, entertaining applets for
the Web. The general-purpose, high-level Java programming language is also a powerful
software platform. Using the generous API, you can write many types of programs.
An application is a standalone program that runs directly on the Java platform. A special kind
of application known as a server serves and supports clients on a network. Examples of
servers are Web servers, proxy servers, mail servers, and print servers. Another specialized
program is a servlet. A servlet can almost be thought of as an applet that runs on the server
side. Java Servlets are a popular choice for building interactive web applications, replacing
the use of CGI scripts. Servlets are similar to applets in that they are runtime extensions of
applications. Instead of working in browsers, though, servlets run within Java Web servers,
configuring or tailoring the server.
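As a small illustration of the servlet idea, the sketch below shows a minimal HttpServlet that could run inside a Java web server such as Apache Tomcat (the server used in this project); the class name and output are illustrative only.

import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Minimal servlet sketch: it runs inside the web server and answers HTTP GET requests.
public class HelloServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        response.setContentType("text/html");
        PrintWriter out = response.getWriter();
        out.println("<html><body><h2>Hello from a servlet</h2></body></html>");
    }
}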
How does the API support all these kinds of programs? It does so with packages of software
components that provide a wide range of functionality. Every full implementation of the
Java platform gives you the following features:
 The essentials: Objects, strings, threads, numbers, input and output, data
structures, system properties, date and time, and so on.
 Applets: The set of conventions used by applets.
 Networking: URLs, TCP (Transmission Control Protocol), UDP (User Data
gram Protocol) sockets, and IP (Internet Protocol) addresses.
 Internationalization: Help for writing programs that can be localized for
users worldwide. Programs can automatically adapt to specific locales and be
displayed in the appropriate language.
 Security: Both low level and high level, including electronic signatures,
public and private key management, access control, and certificates.
 Software components: Known as JavaBeansTM, can plug into existing
component architectures.
 Object serialization: Allows lightweight persistence and communication via
Remote Method Invocation (RMI).
 Java Database Connectivity (JDBCTM): Provides uniform access to a wide
range of relational databases.
The Java platform also has APIs for 2D and 3D graphics, accessibility, servers,
collaboration, telephony, speech, animation, and more. The following figure depicts
what is included in the Java 2 SDK.

How Will Java Technology Change My Life?

We can’t promise you fame, fortune, or even a job if you learn the Java programming
language. Still, it is likely to make your programs better and requires less effort than other
languages. We believe that Java technology will help you do the following:
 Get started quickly: Although the Java programming language is a powerful
object-oriented language, it’s easy to learn, especially for programmers
already familiar with C or C++.
 Write less code: Comparisons of program metrics (class counts, method
counts, and so on) suggest that a program written in the Java programming
language can be four times smaller than the same program in C++.
 Write better code: The Java programming language encourages good coding
practices, and its garbage collection helps you avoid memory leaks. Its object
orientation, its JavaBeans component architecture, and its wide-ranging, easily
extendible API let you reuse other people’s tested code and introduce fewer
bugs.
 Develop programs more quickly: Your development time may be as much as
twice as fast versus writing the same program in C++. Why? You write fewer
lines of code and it is a simpler programming language than C++.
 Avoid platform dependencies with 100% Pure Java: You can keep your
program portable by avoiding the use of libraries written in other languages.
The 100% Pure JavaTM Product Certification Program has a repository of
historical process manuals, white papers, brochures, and similar materials
online.
 Write once, run anywhere: Because 100% Pure Java programs are compiled
into machine-independent byte codes, they run consistently on any Java
platform.
 Distribute software more easily: You can upgrade applets easily from a
central server. Applets take advantage of the feature of allowing new classes
to be loaded “on the fly,” without recompiling the entire program.
ODBC
Microsoft Open Database Connectivity (ODBC) is a standard programming interface for
application developers and database systems providers. Before ODBC became a de facto
standard for Windows programs to interface with database systems, programmers had to use
proprietary languages for each database they wanted to connect to. Now, ODBC has made
the choice of the database system almost irrelevant from a coding perspective, which is as it
should be. Application developers have much more important things to worry about than the
syntax that is needed to port their program from one database to another when business
needs suddenly change.
Through the ODBC Administrator in Control Panel, you can specify the particular database
that is associated with a data source that an ODBC application program is written to use.
Think of an ODBC data source as a door with a name on it. Each door will lead you to a
particular database. For example, the data source named Sales Figures might be a SQL
Server database, whereas the Accounts Payable data source could refer to an Access
database. The physical database referred to by a data source can reside anywhere on the
LAN.
The ODBC system files are not installed on your system by Windows 95. Rather, they are
installed when you setup a separate database application, such as SQL Server Client or Visual
Basic 4.0. When the ODBC icon is installed in Control Panel, it uses a file called
ODBCINST.DLL. It is also possible to administer your ODBC data sources through a stand-
alone program called ODBCADM.EXE. There is a 16-bit and a 32-bit version of this
program and each maintains a separate list of ODBC data sources.

From a programming perspective, the beauty of ODBC is that the application can be written
to use the same set of function calls to interface with any data source, regardless of the
database vendor. The source code of the application doesn’t change whether it talks to
Oracle or SQL Server. We only mention these two as an example. There are ODBC drivers
available for several dozen popular database systems. Even Excel spreadsheets and plain
text files can be turned into data sources. The operating system uses the Registry information
written by ODBC Administrator to determine which low-level ODBC drivers are needed to
talk to the data source (such as the interface to Oracle or SQL Server). The loading of the
ODBC drivers is transparent to the ODBC application program. In a client/server
environment, the ODBC API even handles many of the network issues for the application
programmer.
The advantages of this scheme are so numerous that you are probably thinking there must be
some catch. The only disadvantage of ODBC is that it isn’t as efficient as talking directly to
the native database interface. ODBC has had many detractors make the charge that it is too
slow. Microsoft has always claimed that the critical factor in performance is the quality of
the driver software that is used. In our humble opinion, this is true. The availability of good
ODBC drivers has improved a great deal recently. And anyway, the criticism about
performance is somewhat analogous to those who said that compilers would never match the
speed of pure assembly language. Maybe not, but the compiler (or ODBC) gives you the
opportunity to write cleaner programs, which means you finish sooner. Meanwhile,
computers get faster every year.
JDBC
In an effort to set an independent database standard API for Java; Sun Microsystems
developed Java Database Connectivity, or JDBC. JDBC offers a generic SQL database
access mechanism that provides a consistent interface to a variety of RDBMSs. This
consistent interface is achieved through the use of “plug-in” database connectivity modules,
or drivers. If a database vendor wishes to have JDBC support, he or she must provide the
driver for each platform that the database and Java run on.
To gain a wider acceptance of JDBC, Sun based JDBC’s framework on ODBC. As you
discovered earlier in this chapter, ODBC has widespread support on a variety of platforms.
Basing JDBC on ODBC will allow vendors to bring JDBC drivers to market much faster
than developing a completely new connectivity solution.
JDBC was announced in March of 1996. It was released for a 90 day public review that
ended June 8, 1996. Because of user input, the final JDBC v1.0 specification was released
soon after.
The remainder of this section will cover enough information about JDBC for you to know
what it is about and how to use it effectively. This is by no means a complete overview of
JDBC. That would fill an entire book.

JDBC Goals
Few software packages are designed without goals in mind. JDBC is no exception; its many
goals drove the development of the API. These goals, in conjunction with early reviewer
feedback, have finalized the JDBC class library into a solid framework for building database
applications in Java.
The goals that were set for JDBC are important. They will give you some insight as to why
certain classes and functionalities behave the way they do. The eight design goals for JDBC
are as follows:

1. SQL Level API


The designers felt that their main goal was to define a SQL interface for Java. Although
not the lowest database interface level possible, it is at a low enough level for higher-level
tools and APIs to be created. Conversely, it is at a high enough level for application
programmers to use it confidently. Attaining this goal allows for future tool vendors to
“generate” JDBC code and to hide many of JDBC’s complexities from the end user.
2. SQL Conformance
SQL syntax varies as you move from database vendor to database vendor. In an effort to
support a wide variety of vendors, JDBC will allow any query statement to be passed
through it to the underlying database driver. This allows the connectivity module to
handle non-standard functionality in a manner that is suitable for its users.

3. JDBC must be implementable on top of common database interfaces


The JDBC SQL API must “sit” on top of other common SQL level APIs. This goal
allows JDBC to use existing ODBC level drivers by the use of a software interface.
This interface would translate JDBC calls to ODBC and vice versa.

4. Provide a Java interface that is consistent with the rest of the Java system
Because of Java’s acceptance in the user community thus far, the designers feel that

they should not stray from the current design of the core Java system.

5. Keep it simple
This goal probably appears in all software design goal listings. JDBC is no exception.
Sun felt that the design of JDBC should be very simple, allowing for only one method of
completing a task per mechanism. Allowing duplicate functionality only serves to confuse
the users of the API.

6. Use strong, static typing wherever possible


Strong typing allows for more error checking to be done at compile time; also, fewer
errors appear at runtime.

7. Keep the common cases simple


Because more often than not, the usual SQL calls used by the programmer are simple
SELECT’s, INSERT’s, DELETE’s and UPDATE’s, these queries should be simple to
perform with JDBC. However, more complex SQL statements should also be possible.
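The sketch below shows the kind of simple, common-case JDBC usage that goal 7 refers to: open a connection, run a parameterized SELECT, and iterate over the results. The connection URL, credentials, table, and column names are placeholders rather than this project's actual schema.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

// Common-case JDBC sketch; the Oracle URL, credentials, and schema are placeholders.
public final class JdbcExample {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:oracle:thin:@localhost:1521:XE";   // placeholder connection URL
        try (Connection con = DriverManager.getConnection(url, "user", "password");
             PreparedStatement ps = con.prepareStatement(
                     "SELECT username FROM cloud_users WHERE status = ?")) {
            ps.setString(1, "ACTIVE");                        // bind the query parameter
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getString("username"));
                }
            }
        }
    }
}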

Finally, we decided to proceed with the implementation using Java networking, and for
dynamically updating the cache table we use an MS Access database.

Java has two things: a programming language and a platform.

Java is a high-level programming language that is all of the following:

Simple              Architecture-neutral
Object-oriented     Portable
Distributed         High-performance
Interpreted         Multithreaded
Robust              Dynamic
Secure

Java is also unusual in that each Java program is both compiled and interpreted.
With a compiler, you translate a Java program into an intermediate language called
Java byte codes: platform-independent code that is passed to and run on the
computer.

Compilation happens just once; interpretation occurs each time the program is
executed. The figure illustrates how this works.

(Figure: Java program compilation and interpretation)

You can think of Java byte codes as the machine code instructions for the Java
Virtual Machine (Java VM). Every Java interpreter, whether it's a Java
development tool or a Web browser that can run Java applets, is an implementation
of the Java VM. The Java VM can also be implemented in hardware.
Java byte codes help make "write once, run anywhere" possible. You can compile your Java
program into byte codes on any platform that has a Java compiler. The byte codes can then be
run on any implementation of the Java VM. For example, the same Java program can run on
Windows NT, Solaris, and Macintosh.
7. SYSTEM TESTING
The purpose of testing is to discover errors. Testing is the process of trying to discover every
conceivable fault or weakness in a work product. It provides a way to check the functionality
of components, sub-assemblies, assemblies, and/or a finished product. It is the process of
exercising software with the intent of ensuring that the software system meets its
requirements and user expectations and does not fail in an unacceptable manner. There are
various types of tests. Each test type addresses a specific testing requirement.

TYPES OF TESTS
Unit testing
Unit testing involves the design of test cases that validate that the internal program logic is
functioning properly, and that program inputs produce valid outputs. All decision branches
and internal code flow should be validated. It is the testing of individual software units of the
application; it is done after the completion of an individual unit and before integration. This is
structural testing that relies on knowledge of the unit's construction and is invasive. Unit tests
perform basic tests at the component level and test a specific business process, application,
and/or system configuration. Unit tests ensure that each unique path of a business process
performs accurately to the documented specifications and contains clearly defined inputs and
expected results.

Integration testing
Integration tests are designed to test integrated software components to determine if they
actually run as one program. Testing is event driven and is more concerned with the basic
outcome of screens or fields. Integration tests demonstrate that although the components were
individually satisfactory, as shown by successful unit testing, the combination of
components is correct and consistent. Integration testing is specifically aimed at exposing
the problems that arise from the combination of components.
Functional test

Functional tests provide systematic demonstrations that functions tested are available as
specified by the business and technical requirements, system documentation, and user
manuals.

Functional testing is centered on the following items:

Valid Input : identified classes of valid input must be accepted.

Invalid Input : identified classes of invalid input must be rejected.

Functions : identified functions must be exercised.

Output : identified classes of application outputs must be exercised.

Systems/Procedures : interfacing systems or procedures must be invoked.

Organization and preparation of functional tests is focused on requirements, key functions, or
special test cases. In addition, systematic coverage pertaining to identifying business process
flows, data fields, predefined processes, and successive processes must be considered for
testing. Before functional testing is complete, additional tests are identified and the effective
value of current tests is determined.
System Test
System testing ensures that the entire integrated software system meets requirements. It tests
a configuration to ensure known and predictable results. An example of system testing is the
configuration oriented system integration test. System testing is based on process descriptions
and flows, emphasizing pre-driven process links and integration points.

White Box Testing

White Box Testing is testing in which the software tester has knowledge of the inner
workings, structure, and language of the software, or at least its purpose. It is used
to test areas that cannot be reached from a black box level.

Black Box Testing


Black Box Testing is testing the software without any knowledge of the inner workings,
structure, or language of the module being tested. Black box tests, as most other kinds of
tests, must be written from a definitive source document, such as a specification or
requirements document. It is testing in which the software under test is treated as a black
box: you cannot "see" into it. The test provides inputs and responds to outputs without
considering how the software works.

7.1 Unit Testing:

Unit testing is usually conducted as part of a combined code and unit test phase of the
software lifecycle, although it is not uncommon for coding and unit testing to be conducted as
two distinct phases.

Test strategy and approach


Field testing will be performed manually and functional tests will be written in detail.

Test objectives
 All field entries must work properly.
 Pages must be activated from the identified link.
 The entry screen, messages and responses must not be delayed.

Features to be tested
 Verify that the entries are of the correct format
 No duplicate entries should be allowed
 All links should take the user to the correct page.

7.2 Integration Testing


Software integration testing is the incremental integration testing of two or more integrated
software components on a single platform to produce failures caused by interface defects.
The task of the integration test is to check that components or software applications, e.g.
components in a software system or – one step up – software applications at the company
level – interact without error.

Test Results: All the test cases mentioned above passed successfully. No defects
encountered.
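For illustration, a minimal integration check of the interface between the web tier and the database helper listed in the appendix (databasecon) could look like the sketch below. It assumes a local MySQL instance with the profit schema and the MySQL driver on the classpath, so it is a sketch rather than one of the recorded test cases.

import org.junit.Test;
import static org.junit.Assert.*;

import java.sql.Connection;
import databaseconnection.databasecon;

public class DatabaseIntegrationTest {

    @Test
    public void componentsInteractWithoutError() throws Exception {
        // A failure here points to an interface defect such as a wrong URL,
        // a missing driver, or bad credentials.
        Connection con = databasecon.getconnection();
        assertNotNull("A connection should be established", con);
        assertFalse("The connection should be open", con.isClosed());
        con.close();
    }
}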

7.3 Acceptance Testing


User Acceptance Testing is a critical phase of any project and requires significant
participation by the end user. It also ensures that the system meets the functional
requirements.

Test Results: All the test cases mentioned above passed successfully. No defects
encountered.

7.4 TestCases
Test cases can be divided into two types: positive test cases and negative test cases. Positive
test cases are conducted by the developer with the intention of obtaining the expected output.
Negative test cases are conducted by the developer with the intention of confirming that the
expected output is not produced for invalid input.
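For example, test case 2 below (username and password verification) can be phrased once as a positive case and once as a negative case. The LoginService class in this sketch is a hypothetical stub standing in for the project's login verification logic, used only to show the intent of the two case types.

import org.junit.Test;
import static org.junit.Assert.*;

// Hypothetical stub for the login verification performed by the application.
class LoginService {
    boolean login(String username, String password) {
        return "demo".equals(username) && "demo123".equals(password);
    }
}

public class LoginTestCases {

    @Test
    public void positiveCaseValidCredentialsLogin() {
        // +VE case: correct details are expected to log in successfully (Result = True).
        assertTrue(new LoginService().login("demo", "demo123"));
    }

    @Test
    public void negativeCaseInvalidCredentialsFail() {
        // -VE case: wrong details are expected to fail the login (Result = False).
        assertFalse(new LoginService().login("demo", "wrong-password"));
    }
}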

+VE TEST CASES

Test case 1: Create new user (registration process)
  Actual value: Enter the personal info and address info.
  Expected value: Personal info and address info are updated into the Oracle database successfully.
  Result: True

Test case 2: Enter the username and password
  Actual value: Verification of login details.
  Expected value: Login successful.
  Result: True

Test case 3: Upload file into single cloud / multiple clouds
  Actual value: Enter all fields.
  Expected value: Web data uploaded successfully.
  Result: True

Test case 4: Enter keyword query submission
  Actual value: Enter valid query.
  Expected value: Relevant records displayed based on the keyword query.
  Result: True
-VE TEST CASES

Test case 1: Create new user (registration process)
  Actual value: Enter the personal info and address info.
  Expected value: Personal info and address info are not updated into the database.
  Result: False

Test case 2: Enter the username and password
  Actual value: Verification of login details.
  Expected value: Login failed.
  Result: False

Test case 3: Upload information
  Actual value: Enter all fields.
  Expected value: Web data is not created successfully.
  Result: False

Test case 4: Enter keyword query submission
  Actual value: Enter valid query.
  Expected value: Relevant records are not present in the database.
  Result: False

8 Conclusion and Future Enhancements

To maximize the profit of service providers, this paper has proposed a novel Double-Quality-
Guaranteed (DQG) renting scheme. This scheme combines short-term renting with long-term
renting, which can greatly reduce resource waste and adapt to the dynamic demand for
computing capacity. An M/M/m+D queueing model is built for our multiserver system with
varying system size. Then, an optimal configuration problem of profit maximization is
formulated in which many factors are taken into consideration, such as the market demand,
the workload of requests, the service-level agreement, the rental cost of servers, the cost of
energy consumption, and so forth. The optimal solutions are obtained for two different
situations: the ideal optimal solutions and the actual optimal solutions. In addition, a series of
calculations are conducted to compare the profit obtained by the DQG renting scheme with
that of the Single-Quality-Unguaranteed (SQU) renting scheme. The results show that our
scheme outperforms the SQU scheme in terms of both service quality and profit.
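Purely as an illustration of the cost structure summarized above, and not of the paper's actual optimization, the sketch below computes a profit figure from hypothetical hourly values for revenue, long-term rental, short-term (temporary) rental, and energy consumption; all numbers are placeholders.

public class ProfitSketch {

    public static void main(String[] args) {
        // Hypothetical hourly figures; none of these values come from the paper.
        double revenue = 120.0;        // average charge collected from served requests
        double longTermRent = 40.0;    // rental cost of the long-term servers
        double shortTermRent = 15.0;   // rental cost of temporary servers for overflow requests
        double energyCost = 10.0;      // cost of energy consumption

        // Schematic profit under a double renting scheme: revenue minus all three costs.
        double profit = revenue - longTermRent - shortTermRent - energyCost;
        System.out.println("Hypothetical hourly profit: " + profit);
    }
}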
Sample code

package databaseconnection;

import java.sql.*;

public class databasecon
{
    static Connection con;

    public static Connection getconnection()
    {
        try
        {
            // Load the MySQL JDBC driver and connect to the local "profit" database
            Class.forName("com.mysql.jdbc.Driver");
            con = DriverManager.getConnection("jdbc:mysql://localhost:3306/profit", "root", "root");
        }
        catch (Exception e)
        {
            System.out.println("class error");
        }
        return con;
    }
}

<%@page
import="com.oreilly.servlet.*,java.sql.*,java.lang.*,java.text.SimpleDateFormat,java.util.*,ja
va.io.*,javax.servlet.*, javax.servlet.http.*" %>

<%@ page import="java.sql.*,databaseconnection.*" errorPage="" %>


<html>
<head>
</head>
<body>
<%
ArrayList list = new ArrayList();
ServletContext context = getServletContext();

String dirName =context.getRealPath("");


String paramname = null;
String file = null;
String mobileno = null, country = null, a = null, c = null, d = null, ee = null, fg = null,
       photo = null, fname = null, lname = null, user_type = null, email = null,
       username = null, password = null;
String bin = "";
FileInputStream fs=null;
FileInputStream fss=null;

File file1 = null;


File file2 = null;

try {

MultipartRequest multi = new MultipartRequest(request, dirName, 10 * 1024 * 1024); // 10MB

Enumeration params = multi.getParameterNames();


while (params.hasMoreElements())
{
paramname = (String) params.nextElement();

if(paramname.equalsIgnoreCase("fname"))
{
fname=multi.getParameter(paramname);
}
if(paramname.equalsIgnoreCase("lname"))
{
lname=multi.getParameter(paramname);
}

if(paramname.equalsIgnoreCase("photo"))
{
photo=multi.getParameter(paramname);
}

if(paramname.equalsIgnoreCase("email"))
{
email=multi.getParameter(paramname);
}

if(paramname.equalsIgnoreCase("username"))
{
username=multi.getParameter(paramname);
session.setAttribute("username",username);
System.out.println("username in register page
is"+username);
}

if(paramname.equalsIgnoreCase("password"))
{
password=multi.getParameter(paramname);
System.out.println("password in register page
is"+password);

}
if(paramname.equalsIgnoreCase("mobileno"))
{
mobileno=multi.getParameter(paramname);
}
if(paramname.equalsIgnoreCase("country"))
{
country=multi.getParameter(paramname);
}

} // end of the form-field loop

// Process the uploaded file parts
int f = 0;
Enumeration files = multi.getFileNames();
while (files.hasMoreElements())
{
paramname = (String) files.nextElement();
if(paramname.equals("d1"))
{
paramname = null;
}

if(paramname != null)
{
f = 1;
photo = multi.getFilesystemName(paramname);
String fPath = context.getRealPath(""+photo);
file1 = new File(fPath);
fs = new FileInputStream(file1);
list.add(fs);

String ss=fPath;
FileInputStream fis = new FileInputStream(ss);
StringBuffer sb1=new StringBuffer();
int i = 0;
while ((i = fis.read()) != -1) {
if (i != -1) {
//System.out.println(i);
String hex = Integer.toHexString(i);
// session.put("hex",hex);
sb1.append(hex);
// sb1.append(",");

}
}

FileInputStream fs1 = null;

//name=dirName+"\\Gallery\\"+image;
int lyke=0;
//String as="0";

Connection con = databasecon.getconnection();


PreparedStatement ps = con.prepareStatement("insert into customer(fname,lname,email,username,password,photo,mobileno,country) values(?,?,?,?,?,?,?,?)");

ps.setString(1,fname);
ps.setString(2,lname);
ps.setString(3,email);
ps.setString(4,username);
ps.setString(5,password);

// Store the uploaded photo as a BLOB in parameter 6
ps.setBinaryStream(6, (InputStream) fs, (int) file1.length());
ps.setString(7,mobileno);
ps.setString(8,country);

int x = ps.executeUpdate();

response.sendRedirect("cloudcustomer.jsp?success");
} // end of if(paramname != null)
} // end of the file loop
}
catch (Exception e)
{
out.println(e.getMessage());
}
%>
</body>
</html>
Screenshots
