Utility computing is the packaging of computing resources, such as computation, storage and services, as a metered service. This model has the advantage of a low or no initial cost to acquire computer resources; instead, computational resources are essentially rented. This repackaging of computing services became the foundation of the shift to "On Demand" computing, Software as a Service and Cloud Computing models that further propagated the idea of computing, application and network as a service. There was some initial skepticism about such a significant shift. However, the new model of computing caught on and eventually became mainstream. IBM, HP and Microsoft were early leaders in the new field of Utility Computing, with their business units and researchers working on the architecture, payment and development challenges of the new
computing model. Google, Amazon and others started to take the lead in 2008, as they established their own utility services for computing, storage and applications. Utility computing can support grid computing, which has the characteristic of very large computations or sudden peaks in demand that are supported via a large number of computers. "Utility computing" has usually envisioned some form of virtualization so that the amount of storage or computing power available is considerably larger than that of a single time-sharing computer. Multiple servers are used on the "back end" to make this possible. These might be a dedicated computer cluster specifically built for the purpose of being rented out, or even an under-utilized supercomputer. The technique of running a single calculation on multiple computers is known as distributed computing. The term "grid computing" is often used to describe a particular form of distributed computing, where the supporting nodes are geographically distributed or cross administrative domains. To provide utility computing services, a company can "bundle" the resources of members of the public for sale; these members might be paid with a portion of the revenue from clients. One model, common among volunteer computing applications, is for a central server to dispense tasks to participating nodes, at the behest of approved end-users (in the commercial case, the paying customers). Another model, sometimes called the Virtual Organization (VO), is more decentralized, with organizations buying and selling computing resources as needed or as they go idle.
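The central-dispatch model lends itself to a small illustration. The following Python sketch is hypothetical (the task content, worker count, and the use of threads in place of real networked nodes are all assumptions made for brevity): a coordinator queues tasks and participating "nodes" pull work whenever they are idle.

    # Minimal sketch of central dispatch: a coordinator queues tasks and
    # worker "nodes" (threads standing in for remote machines) pull them.
    import queue
    import threading

    tasks = queue.Queue()
    for n in range(10):
        tasks.put(n)                      # each task: square a number

    results, lock = [], threading.Lock()

    def worker(node_id):
        while True:
            try:
                n = tasks.get_nowait()    # ask the coordinator for work
            except queue.Empty:
                return                    # no work left; node goes idle
            with lock:
                results.append((node_id, n, n * n))

    nodes = [threading.Thread(target=worker, args=(i,)) for i in range(3)]
    for t in nodes:
        t.start()
    for t in nodes:
        t.join()
    print(sorted(r[1:] for r in results))   # results gathered centrally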
The definition of "utility computing" is sometimes extended to specialized tasks, such as web services.

HISTORY: Utility computing merely means "Pay and Use", with regards to computing power. IBM and other mainframe providers conducted this kind of business in the following two decades, often referred to as timesharing, offering computing power and database storage to banks and other large organizations from their worldwide data centers. To facilitate this business model, mainframe operating systems evolved to include process control facilities, security, and user metering. The advent of minicomputers changed this business model by making computers affordable to almost all companies. As Intel and AMD increased the power of PC-architecture servers with each new generation of processor, data centers became filled with thousands of servers.

In the late 90's utility computing re-surfaced. InsynQ, Inc. launched [on-demand] applications and desktop hosting services in 1997 using HP equipment. In 1998, HP set up the Utility Computing Division in Mountain View, CA, assigning former Bell Labs computer scientists to begin work on a computing power plant, incorporating multiple utilities to form a software stack. Services such as "IP billing-on-tap" were marketed. HP introduced the Utility Data Center in 2001. Sun announced the Sun Cloud service to consumers in 2000. In December 2005, Alexa launched Alexa Web Search Platform, a Web search building tool for which the underlying power is utility computing. Alexa charges users for storage, utilization, etc. In spring 2006 3tera announced its AppLogic service, and later that summer Amazon launched Amazon EC2 (Elastic Compute Cloud). These services allow the operation of general-purpose computing applications. Both are based on Xen virtualization software, and the most commonly used operating system on the virtual computers is Linux, though Windows and Solaris are supported. Common uses include web applications, SaaS, image rendering and processing, but also general-purpose business applications.

PolyServe Inc. offers a clustered file system based on commodity server and storage hardware that creates highly available utility computing environments for mission-critical applications, including Oracle and Microsoft SQL Server databases, as well as workload-optimized solutions specifically tuned for bulk storage, high-performance computing, vertical industries such as financial services, seismic processing, and content serving. The Database Utility and File Serving Utility enable IT organizations to independently add servers or storage as needed, retask workloads to different hardware, and maintain the environment without disruption. There is space in the market for specific industries and applications as well as other niche applications powered by utility computing.

Utility Computing Today: How does utility computing play out in today's storage and networking marketplace?
Depending on who you talk to, utility computing might be an IT management approach, a business strategy or a software/hardware tool. HP's Mark Linesch, VP of adaptive enterprise programs, put it as well as anybody: "It's not about a big new technology. It's about establishing a tighter, more dynamic link between the business and its IT infrastructure." That is because utility computing lives or dies on the integration of its parts. Utility networks exist today, but true utility computing requires close coordination between hardware components, the applications that run on them, and the data management tools that handle provisioning, storage pooling, and a myriad of tasks that require wide-scale automation across a utility network. The utility infrastructure must be able to automatically provision and deliver resources on demand, while tracking usage for later chargeback. Such a level of flexibility and tracking requires management tools that are currently in their infancy, which explains why not every company is jumping on the utility bandwagon (basing your company's IT life on a bunch of relatively untried tools is only for the very brave or the foolhardy). But the real holdup for utility computing is that application providers have yet to move en masse toward UC-ready licensing models. Ideally, utility computing pricing models would allow customers to pay "by the sip," much as we do with electricity and water. But software vendors are still predominantly selling their products on a per-seat or per-CPU basis, regardless of how much or how little an individual seat or CPU is utilized. "The software licensing models in particular are currently the barrier to utility pricing models," says Corey Ferengul, senior vice president at Meta Group.
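To make "pay by the sip" concrete, here is a toy metering calculation in Python. The meter names, rates, and record format are invented for this sketch; a real provider's billing pipeline would be far more involved.

    # Toy usage-based billing: charges accrue from metered consumption
    # rather than per-seat or per-CPU licenses. Rates are hypothetical.
    from dataclasses import dataclass

    RATES = {"cpu_hours": 0.10, "gb_stored": 0.02, "gb_transferred": 0.05}

    @dataclass
    class UsageRecord:
        customer: str
        meter: str        # one of the RATES keys
        quantity: float

    def invoice(records):
        totals = {}
        for r in records:
            totals[r.customer] = totals.get(r.customer, 0.0) + RATES[r.meter] * r.quantity
        return totals

    usage = [UsageRecord("acme", "cpu_hours", 120.0),
             UsageRecord("acme", "gb_stored", 500.0),
             UsageRecord("globex", "cpu_hours", 8.0)]
    print(invoice(usage))   # {'acme': 22.0, 'globex': 0.8}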
Like ILM, utility computing is more a strategic approach than a specific application or suite of applications. The idea behind utility computing is to provide unlimited computing power and storage capacity that can be used and reallocated for any application, and billed on a pay-per-use basis. Ideally, utility computing also needs scalable, standardized, and heterogeneous computing resources, and should not depend on highly proprietary hardware or software to work. Utility computing brings some important benefits with it. These include:

Simplified administration. Reduces time-consuming and complex administration overhead. Enables administrators to manage fast growth and peaks-and-valleys capacity and processing demands.

Capacity to meet business needs. Avoids network downtime and lag by immediately provisioning for changing needs. Leverages infrastructure costs to meet changing business requirements and serves business growth.

Cost-effective. Automated provisioning based on need yields excellent ROI on internal resources. This will happen faster when going to a reliable SP model, but internal deployment will yield the same benefits.
Basic requirements for successful utility computing:

Automating costing procedures for computing resources. Billing or chargeback information should be driven by the capacity required to support business processes. Note that this sounds good on paper but can lead to heavy political infighting: many business units hate chargeback because it adds costs to their bottom line. But in the face of spiraling IT costs, all of which are coming out of their budget, CIOs are increasingly unsympathetic. As a result of properly aligning infrastructure with business processes, the business wants IT to help minimize the costs of providing business services.

Automated provisioning to meet the business unit's scaled-up or scaled-down needs. Without automated provisioning, IT departments have to resort to painful manual techniques to deal with impossibly complex server farms, a plethora of operating systems, multiplying storage systems and expensive management software. The better automated provisioning technology gets, the easier this critical piece of utility computing will become.
Virtualization. Virtualization is an underlying technology that makes it possible to quickly ready storage for incoming applications. Virtualization actually ranges from a visual screen where administrators can make changes to their storage assignments, up to automatic provisioning where the software does it for you.

Security. If you thought security was tough in a regular network environment, try a utility computing network that is serving hundreds or thousands of customers. A case in point is the recent denial-of-service attack that Sun suffered on the very first day that the company allowed users to buy Internet access to its much-hyped, and much delayed, public utility grid.

Provisioning. This is the big one. Virtualization and automatic provisioning will have to work across operating systems and switches, storage networking devices (hosts, storage, etc.), and in multi-vendor environments. Automation should work to allocate computing power and storage room to shifting workloads. It should also know how to apply various settings, like user authentication and security policies, to various types of data and originating applications, and be able to apply them to specific business processes. And yes, this is a tall order. (A sketch of this policy-driven provisioning follows this list.)

Flexible systems. Other types of automation, such as recovery, come into play here: Discovery, to automatically identify resources; Configuration, which automatically implements network settings across environments, like system configurations, security settings and storage definitions; and Self-healing, which automates problem detection and subsequent correction or healing.
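The policy-driven provisioning sketch promised above, in Python. The pool size, policy names, and the idea of tagging each allocation with a security policy are assumptions for illustration, not a description of any vendor's product.

    # Sketch: provision capacity from a pool and automatically apply the
    # security policy appropriate to the requesting application's type.
    POLICIES = {"web": "public-tls", "finance": "restricted"}

    class StoragePool:
        def __init__(self, free_gb):
            self.free_gb = free_gb

        def provision(self, app, kind, needed_gb):
            if needed_gb > self.free_gb:
                raise RuntimeError("pool exhausted; add capacity")
            self.free_gb -= needed_gb
            policy = POLICIES.get(kind, "default")   # per-type settings
            return f"{app}: {needed_gb} GB provisioned under policy '{policy}'"

    pool = StoragePool(free_gb=1000)
    print(pool.provision("payroll-db", "finance", 200))
    print(pool.provision("storefront", "web", 300))
    print(pool.free_gb)   # 500 GB left for shifting workloads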
Grid computing and SOA: Grid computing is a form of distributed computing where resources are often spread across different physical locations and domains. Grid computing is a foundation technology for models like utility computing, where computing resources are pay-per-use commodities. SOA (Service-Oriented Architecture) is a computing architecture that undergirds the act of delivering IT as a service. SOA can be used for designing, building, and managing distributed computing environments; it works best with standards-based computing resources, and efficiently enables utility computing infrastructure development.

Utility Computing and SMB: Amazingly enough, utility computing might not be purely a matter for the enterprise. IT can be as complex for SMB to manage as for the enterprise, and SMB commonly lacks internal IT skills to optimize their network infrastructure, so SMB can benefit from a solidly hosted, reliable and high-performance model. According to strategic consultancy THINK strategies, most SMBs adopting utility computing will outsource to an SP. (Internally deploying a utility computing infrastructure runs into exactly the same challenges driving SMB to utility computing in the first place.) There are differences between SMB and the enterprise utility computing models, particularly the lack of a chargeback model in SMB. At this point, SMB's utility computing SPs depend primarily on network and performance management tools, software distribution tools, and software diagnostic tools to serve their SMB clients.
These tools include: Network Management tools to proactively monitor hardware states; Performance Management tools to effectively measure network, system, and software performance; Software distribution tools to automatically update operating systems and applications from a central console; and Software diagnostic tools to perform system and software analyses.

Predictions: I expect utility computing to dovetail with developments in grid computing, SOA, automated provisioning and discovery, security, self-healing techniques, and other foundational technologies. Over time, storage, databases and applications will increasingly be made available for customers to access on demand over networks that appear as one large virtual computing system. SMB will increasingly turn to its own brand of utility computing, where they turn over network management to an SP.

A Quick Definition: Utility computing is a model in which each IT resource is treated as a unit of capacity that is delivered when and where it is needed. Utility computing provides the enterprise with a charge-back function to support this business model. Utility computing is ultimately about how companies can make better use of all their computing resources. By delivering fast and intelligent access to network resources, utility computing leverages computing infrastructure costs and reduces management overhead.
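As a small illustration of the charge-back function mentioned in the definition, the sketch below apportions a shared infrastructure cost to business units in proportion to the capacity each consumed. The cost figure, units, and capacity numbers are all hypothetical.

    # Toy charge-back: split a shared monthly cost by capacity consumed.
    def chargeback(total_cost, capacity_used):
        total_capacity = sum(capacity_used.values())
        return {unit: total_cost * used / total_capacity
                for unit, used in capacity_used.items()}

    # e.g. 50 TB-months of storage consumed across three business units
    print(chargeback(90_000.0, {"sales": 10.0, "engineering": 30.0, "hr": 10.0}))
    # {'sales': 18000.0, 'engineering': 54000.0, 'hr': 18000.0}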
Technologies that Come into Play: Server virtualization and systems-based automation are two key technologies for enabling utility computing. Server virtualization is the running of multiple server images, called virtual servers (or machines), on a single physical host server. Physical servers can run different operating systems and applications concurrently in isolated virtual machines and can host virtualized high-speed network connections between virtual servers. To the environment, virtual servers look just like separate physical servers. This technology reduces the number of physical servers and associated hardware components you need because virtual servers share network and storage devices.

Resources are shared across applications. Servers and their associated resources are aggregated into pools and allocated to applications as needed. When an application needs more resources, perhaps because of a component failure or a spike in demand, allocation occurs immediately. When demand subsides, resources are returned to the shared pools. (A toy model of this pooling appears below.)

Data center optimization solutions help you plan, build, operate and manage your utility computing infrastructure. The solutions simplify management, increase efficiency and reduce costs by providing: Automatic discovery; Modeling of IT infrastructure; Real-time capacity management; Availability and performance monitoring and management; Automatic provisioning of resources; and A single source of resource reference across all IT disciplines.
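The toy model of pooling referred to above: applications draw servers from a shared pool when demand spikes and return them when demand subsides. The pool size and the grow/shrink interface are invented for this sketch.

    # Shared server pool: allocate on a spike, release when demand subsides.
    class ServerPool:
        def __init__(self, size):
            self.idle = list(range(size))        # server ids
            self.allocated = {}                  # app -> [server ids]

        def grow(self, app, count):
            if count > len(self.idle):
                raise RuntimeError("pool exhausted")
            self.allocated.setdefault(app, []).extend(
                self.idle.pop() for _ in range(count))

        def shrink(self, app, count):
            for _ in range(count):               # demand subsided
                self.idle.append(self.allocated[app].pop())

    pool = ServerPool(8)
    pool.grow("webshop", 3)      # component failure or spike in demand
    pool.shrink("webshop", 2)    # demand subsides
    print(len(pool.idle), pool.allocated)   # 7 {'webshop': [7]}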
Implementing Utility Computing: Implementing a utility computing environment involves four steps: Discovery; Analysis and planning (modeling); Implementation; and Optimization.

The first step is to discover all assets in your IT infrastructure and populate a configuration management database (CMDB) with information about those assets. Information includes IT resources, their configurations and their users. This information helps you determine the relationships among the resources and the services they support. Because the infrastructure changes constantly as you add, update and replace components, automated discovery must run on a regular basis to keep the CMDB up to date. Systems-based solutions come into play here by populating the CMDB and maintaining its accuracy.

In step 2, analysis and planning, you model the target environment to create a workload or application perspective of resource utilization. Modeling takes the guesswork out of physical server sizing and helps you achieve maximum resource utilization. You interact with the model and vary parameters until you understand:
How many physical systems you can move onto a single physical host server as virtual machines; How big the physical host server must be; The right mix of virtual machines on each physical host server; How future growth will affect capacity requirements; and The current resource demand cycles relative to the business services. (The sizing sketch below makes the first of these questions concrete.)

The output of this step is a comprehensive hardware resource plan: knowledge of the specific hardware configurations needed to sustain the virtual environment. To ensure accuracy and compliance of the resulting total configuration, you need to build a Definitive Software Library (DSL) that defines specific software configurations, such as versions and patch levels required for all the hardware configurations specified in the resource plan.

Step 3 involves moving physical server workloads to virtual servers. You'll be making a number of potentially complex changes to the overall IT environment. To minimize risk, you need to encapsulate all changes within a broader change and configuration management process that ensures that only planned changes are authorized, only authorized changes are initiated, and that changes are implemented as planned and authorized. It's a good idea to make the move incrementally, starting with workloads that have the greatest potential for improvement. It's also a good idea to work in a test environment first. Automation during this phase helps bring accuracy, repeatability and scalability to the entire project.
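The sizing sketch referred to in the step 2 modeling questions: a simple first-fit-decreasing packing of per-workload demands onto hosts of a fixed size gives a first estimate of how many physical systems can become virtual machines on one host. The demands and host capacity are invented numbers; real modeling tools consider CPU, I/O, and demand cycles as well.

    # First-fit-decreasing placement of workload memory demands (GB)
    # onto virtual-machine hosts of a given capacity.
    def place_workloads(demands_gb, host_capacity_gb):
        hosts = []
        for d in sorted(demands_gb, reverse=True):   # biggest first
            for host in hosts:
                if sum(host) + d <= host_capacity_gb:
                    host.append(d)
                    break
            else:
                hosts.append([d])                    # open a new host
        return hosts

    # ten physical servers' memory demands, consolidated onto 64 GB hosts
    layout = place_workloads([4, 8, 16, 2, 32, 8, 4, 12, 6, 10], 64)
    print(len(layout), "hosts:", layout)
    # 2 hosts: [[32, 16, 12, 4], [10, 8, 8, 6, 4, 2]]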
In the fourth step, optimization, you implement the utility computing environment, which involves automating the sharing and provisioning of physical and virtual resources and applications, so you can optimally match resource capacity with business requirements through real-time capacity management. This orchestration is based on resource allocation policies that you have established. This approach ensures that only planned and authorized allocation decisions are made in real time. Traditional change management relies on manual steps and change approval boards, so it is critical to establish the automation policies and subject them to comprehensive, closed-loop change and configuration management processes prior to implementing real-time resource allocation strategies. To ensure service levels, it's important to provide sufficient time to act when implementing real-time resource allocation.

Virtualization technologies help you put an end to overprovisioning while still delivering high availability and fast performance. Moreover, they allow you to take full advantage of utility computing. Virtualization and utility computing position you to meet the demands of business users for a continual stream of new and more advanced business services. BMC Software offers solutions that can proactively and effectively manage virtual server environments while minimizing the risks in deploying virtual servers. Bottom line: You can adapt to changing business requirements while continuing to deliver high-quality business services at the lowest possible cost.
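A minimal sketch of the guardrail idea in step 4: real-time allocation requests are checked against pre-established, authorized policies before any resources move. The policy table and its fields are hypothetical.

    # Only allocation decisions inside an approved policy window pass.
    POLICIES = {
        # app -> (min servers, max servers) authorized by change management
        "order-entry": (2, 10),
        "reporting":   (1, 4),
    }

    def authorize_allocation(app, requested):
        if app not in POLICIES:
            return False                # unknown app: no ad-hoc allocation
        low, high = POLICIES[app]
        return low <= requested <= high

    print(authorize_allocation("order-entry", 6))   # True: within policy
    print(authorize_allocation("reporting", 9))     # False: exceeds maximum
    print(authorize_allocation("shadow-app", 1))    # False: not authorized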
Utility Computing Advantages and Disadvantages: For most clients, the biggest advantage of utility computing is convenience. The client doesn't have to buy all the hardware, software and licenses needed to do business. Instead, the client relies on another party to provide these services. The burden of maintaining and administering the system falls to the utility computing company, allowing the client to concentrate on other tasks. Closely related to convenience is compatibility. In a large company with many departments, problems can arise with computing software. Each department might depend on different software suites. The files used by employees in one part of a company might be incompatible with the software used by employees in another part. Utility computing gives companies the option to subscribe to a single service and use the same suite of software throughout the entire client organization. Cost can be either an advantage or disadvantage, depending on how the provider structures fees. Using a utility computing company for services can be less expensive than running computer operations in-house. Most of the cost for maintenance becomes the responsibility of the provider, not the client. The client can choose to rely on simplified hardware, which is less expensive and can be easier to maintain. However, in some cases what the client needs and what the provider offers aren't in alignment. As long as the utility computing company offers the client the services it needs to do business, there's no need for the client to look elsewhere. If the client is a small business
and the provider offers access to expensive supercomputers at a hefty fee, there's a good chance the client will choose to handle its own computing needs. Why pay a high service charge for something you don't need? Another potential disadvantage is reliability. If a utility computing company is in financial trouble or has frequent equipment problems, clients could get cut off from the services for which they're paying. If a utility computing company goes out of business, its clients could fall victim to the same fate. Clients might hesitate to hand over duties to a smaller company if it could mean losing data and other capabilities should the business suffer. Utility computing systems can also be attractive targets for hackers. A hacker might want to access services without paying for them or snoop around and investigate client files. Much of the responsibility of keeping the system safe falls to the provider, but some of it also relies on the client's practices. If a company doesn't educate its workforce on proper access procedures, it's not hard for an intruder to find ways to invade a utility computing company's system. This spells trouble for both the provider and the client. One challenge facing utility computing services is educating consumers about the service. Awareness of utility computing isn't very widespread. It's hard to sell a service to a client if the client has never heard of it. Now that you've read this article, you're ahead of the game. As utility computing companies offer more comprehensive and sophisticated services, we may see more corporations choosing to
use their services. Eventually, it's possible that computers in data centers miles from your home or office will handle all your computational needs for you.

Utility software, by contrast, is a kind of computer software designed to help in the management and tuning of computer hardware, operating system and application software. It performs a single task or a number of small tasks. Examples of utility software are as follows:
- Disk defragmenters
- System Profilers
- Virus scanners
- Application launchers
- Network managers
- Encryption utilities
Grid Computing: Grid computing is a form of distributed computing that involves coordinated and controlled sharing of diverse computing, applications, data, storage, or network resources across dynamic and geographically dispersed multi-institutional virtual organizations.

Background of Grid Computing: The idea of Grid computing resulted from the confluence of three developments:
– The proliferation of largely unused computing resources (especially desktop computers)
– Their greatly increased CPU speed in recent years
– The widespread availability of fast, universal network connections (the Internet)

Need for Grid Computing:
1. The proliferation of largely unused computing resources (especially desktop computers, of which 152 million were sold in 2003).
2. Their greatly increased CPU speed in recent years (now >3 GHz).
3. The widespread availability of fast, universal network connections (the Internet).
4. A user of Grid computing does not need to have the data and the software on the same computer, and neither must be on the user's home (login) computer.
5. High performance computers (formerly called supercomputers) are very expensive to buy and maintain. Much of the enhancement of computing power recently has come through the application of multiple CPUs to a
problem (e.g., NCSC had a 720-processor IBM parallel computer).
6. Many computing tasks relegated to these (especially massively parallel) computers could be performed by a "divide and conquer" strategy using many more, although slower, processors, as are available on a Grid. (A miniature example follows the Evolution list below.)

Evolution of Grid:
1. Custom solutions (early 90s): "Metacomputing" explorative work; applications built directly on Internet protocols (TCP/IP); limited functionality, security, scalability, and robustness.
2. Open Grid Services Architecture (OGSA) (2002): community standard with multiple implementations; Globus GT3 implementation; service-oriented architecture based on XML Web services.
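The "divide and conquer" strategy from point 6 above, in miniature: a large computation is split into chunks and run on several worker processes, which stand in here for the many slower processors of a Grid. The workload (a sum of squares) is chosen only to keep the sketch self-contained.

    # Split one big computation across workers and concatenate the results.
    from multiprocessing import Pool

    def partial_sum(bounds):
        lo, hi = bounds
        return sum(i * i for i in range(lo, hi))

    if __name__ == "__main__":
        n, workers = 10_000_000, 4
        step = n // workers
        chunks = [(i * step, (i + 1) * step) for i in range(workers)]
        with Pool(workers) as pool:
            total = sum(pool.map(partial_sum, chunks))   # combine results
        print(total == sum(i * i for i in range(n)))     # True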
The Globus Toolkit 4: The Globus Toolkit of the Globus Alliance is a middleware for Grid systems, and is therefore described in short as a reference implementation of the OGSA specification, although it is still in development. The Globus project arose from a collaboration between the University of Chicago and the University of Southern California, with the participation of IBM and NASA. It is based, among other things, on the experience of other projects related to Grid technology, such as Condor, Codine / Sun Grid Engine, Legion, Nimrod and Unicore. GT4 offers all necessary components for the implementation of Grid systems; this includes areas of security, data and resource management, and administrative tasks. In addition, it provides interfaces and libraries for popular programming environments. Almost all major Grid projects are based on this toolkit, and so is Instant-Grid.
Advantages:
1. No need to buy large six-figure SMP servers for applications that can be split up and farmed out to smaller commodity-type servers. Results can then be concatenated and analyzed upon job(s) completion.
2. Much more efficient use of idle resources. Jobs can be farmed out to idle servers or even idle desktops. Many of these resources sit idle, especially during off business hours. Policies can be in place that allow jobs to only go to servers that are lightly loaded or have the appropriate amount of memory/CPU characteristics for the particular application.
3. Grid environments are much more modular and don't have single points of failure. If one of the servers/desktops within the grid fails, there are plenty of other resources able to pick up the load. Jobs can automatically restart if a failure occurs.
4. Policies can be managed by the grid software. The software is really the brains behind the grid. A client will reside on each server which sends information back to the master telling it what type of availability or resources it has to complete incoming jobs. (A matchmaking sketch based on this idea appears at the end of this section.)
5. This model scales very well. Need more compute resources? Just plug them in by installing the grid client on additional desktops or servers. They can be removed just as easily on the fly. This modular environment really scales well.
6. Upgrading can be done on the fly without scheduling downtime. Since there are so many resources, some can be taken offline while leaving enough for work to continue. This
way upgrades can be cascaded so as not to affect ongoing projects.
7. Grid environments are extremely well suited to run jobs that can be split into smaller chunks and run concurrently on many nodes. Jobs can be executed in parallel, speeding performance. Using things like MPI will allow message passing to occur among compute resources.

Disadvantages:
1. For memory-hungry applications that can't take advantage of MPI, you may be forced to run on a large SMP.
2. You may need to have a fast interconnect between compute resources (gigabit Ethernet at a minimum; InfiniBand for MPI-intense applications).
3. Some applications may need to be tweaked to take full advantage of the new model.
4. Licensing across many servers may make it prohibitive for some apps. Vendors are starting to be more flexible with environments like this.
5. Grid environments include many smaller servers across various administrative domains. Good tools for managing change and keeping configurations in sync with each other can be challenging in large environments. Tools that exist to manage such challenges include systemimager, cssh, pdsh, cfengine, Opsware, and Bladelogic, among others.
6. Political challenges associated with sharing resources (especially across different admin domains). Many groups are reluctant to share resources even if it benefits everyone involved. The benefits for all groups need to be clearly articulated and policies developed that keep everyone happy (easier said than done).

I believe the biggest barrier right now is education. Areas that are already taking good advantage of grid computing include bioinformatics, cheminformatics, oil & drilling, and financial applications. With the advantages listed above you'll start to see much larger adoption of Grids, which should benefit everyone involved.
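The matchmaking sketch referred to in advantage 4: each grid client reports its load and free memory to the master, and the master dispatches a job only to nodes that satisfy the job's policy, preferring the most lightly loaded. All field names and thresholds are invented for illustration.

    # Master-side node selection based on client-reported availability.
    from dataclasses import dataclass

    @dataclass
    class NodeReport:             # what each grid client sends the master
        name: str
        load: float               # 1-minute load average
        free_mem_gb: float

    def pick_node(nodes, need_mem_gb, max_load=1.0):
        candidates = [n for n in nodes
                      if n.free_mem_gb >= need_mem_gb and n.load <= max_load]
        # prefer the most lightly loaded eligible node, else None
        return min(candidates, key=lambda n: n.load, default=None)

    reports = [NodeReport("desktop-17", 0.2, 6.0),
               NodeReport("server-03", 0.9, 64.0),
               NodeReport("server-04", 3.5, 64.0)]   # too busy: filtered
    print(pick_node(reports, need_mem_gb=16.0))      # -> server-03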