UBM TechWeb White Paper | Data Center Modernization: 8 Best Practices for Unix to Linux Migration
Licensing costs for proprietary Unix versions are significantly higher than for Linux. In addition, a proprietary Unix infrastructure requires special versions of many IT infrastructure components, from storage to software applications, which typically cost more than standard versions. Opting for proprietary servers also locks organizations into future use of those same systems and associated products, reducing flexibility in running the IT infrastructure and in meeting future business needs.

Best-of-Breed Applications
Open systems provide an additional type of flexibility over proprietary ones. More third-party application and tool vendors create products for industry-standard platforms than for proprietary ones, so companies using x86 servers running Linux are more likely to find the software they want and need available for their platforms. With a greater variety of software available, companies can readily assemble whatever best-of-breed applications they need. IT departments become more agile because they have additional options to support their companies' strategies. Without vendor lock-in, an organization can undertake technology planning on its own terms, not the vendor's.

Continually Improving Performance
Proprietary Unix supporters often claim that x86 servers with Linux lack the power to run enterprise applications. That is untrue for modern industry-standard servers. An x86 server with the right processors and Linux creates a platform that is powerful, scalable, and cost effective. For example, independent 4-processor server comparisons show that the Dell PowerEdge R910 has an 89.8 percent performance advantage over a Sun SPARC Enterprise T5440 server and a 121.1 percent performance advantage over a Sun SPARC Enterprise M5000 server.5 More power per server means a greater concentration of computing resources available to a business and, as a result, greater server utilization.
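The cited percentage advantages are relative throughput ratios. As a minimal sketch of how such a figure is derived, the function below computes the percentage by which one benchmark score exceeds another; the scores used here are illustrative placeholders chosen to reproduce the cited 89.8 percent, not the actual SPEC results.

```python
def performance_advantage(score_a: float, score_b: float) -> float:
    """Percentage by which score_a exceeds score_b."""
    return (score_a / score_b - 1) * 100

# Illustrative scores (not actual SPEC results), normalized so the
# comparison server scores 100.0.
r910_score, t5440_score = 189.8, 100.0
print(f"{performance_advantage(r910_score, t5440_score):.1f}%")  # 89.8%
```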
Additionally, open-standard x86 technology progresses more rapidly than proprietary, RISC-based architectures. The widening performance gap means an increasingly greater ability to reduce data center floor space and power consumption while increasing overall performance of the infrastructure.

Processor Power
Next-generation x86 servers are a far cry from their predecessors. With advanced processors such as Intel's Xeon 7500, they offer double the reliability, availability, and serviceability features of even the previous generation. New features in the Xeon 7500 include: Single Device DRAM Correction (SDDC), Memory Mirroring, and MCA recovery, which support redundancy and failover for key system components and help recover failing data connections. Memory thermal throttling, memory demand and patrol scrubbing, and a corrupt data containment mode reduce circuit-level errors and limit the impact of such errors. MCA error logging with OS predictive failure analysis, memory board hot-swap replacement, and electronically isolated partitioning help predict failures before they occur and allow preemptive replacement of failing components. For details, go to http://intel.com/itcenter/topics/missioncritical.

Reliability, Availability, and Serviceability (RAS)
Industry-standard servers often have advanced RAS features that may not be available on proprietary Unix systems. Modern features that address
the evolving reality of IT computing are critical for RAS. One example is the growing importance of memory recovery: as memory usage accelerates, DIMM error rates run as high as 8 percent a year.6 The Intel Xeon processor E7 family includes more than 20 new reliability, availability, and serviceability features that enable levels of data integrity and system resilience never before seen on industry-standard servers. In fact, the Xeon 7500 processors have more than twice the RAS features of the Xeon 5600 processor series. With their modern RAS capabilities, industry-standard servers protect workflow and technology investment in ways that proprietary Unix systems do not.7

Improved Scalability
Industry-standard servers running Linux are extremely scalable. The Dell PowerEdge R910, for example, can use two or four Xeon processors, which support multi-socket system designs without requiring third-party node controllers. This level of scalability reduces excess capital investment. Rather than over-purchase hardware, a company can invest knowing that it can expand server capacity in the future without requiring additional floor space.

More Flexible Management
Organizations that use proprietary Unix systems generally face limited choices in management tools, and available tools typically aren't compatible with equipment from other vendors. With industry-standard servers, however, an IT department needs only one set of tools and training, which reduces costs. A uniform administration platform also means an IT department can use its staff more effectively: system expertise transfers to virtually any part of the company, simplifying human resource planning.

5. Dell vs. Sun Servers: R910 Performance Comparison SPECfp_rate_base2006; Principled Technologies, commissioned by Dell; March 2010
6. DRAM Errors in the Wild: A Large-Scale Field Study; Bianca Schroeder, Eduardo Pinheiro, Wolf-Dietrich Weber; SIGMETRICS; 2009
7. A Catalyst for Mission-Critical Transformation; Intel; 2010
Fundamental Improvement of IT Processes
Proprietary Unix supporters also question whether an open x86 infrastructure model is really more cost effective, especially when considering design, implementation, and support costs. They argue that a strategy of long-term investment in platforms and applications that are sized and deployed for specific workloads is more cost-effective than an open methodology that an organization continuously renews and updates without adverse impact on its day-to-day business. Studies have
A given infrastructure may have a complex chain of dependencies. For example, applications may depend on specific middleware products that run on particular servers. IT must accurately document the dependencies to understand in what order the migration must happen, lest a miscalculation derail the migration process.
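Once dependencies are documented, a valid migration order is a topological sort of the dependency graph. A minimal sketch using Python's standard library, with a hypothetical dependency map (the component names are illustrative):

```python
from graphlib import TopologicalSorter

# Hypothetical dependency map: each component lists what it depends on.
# A component can migrate only after everything it depends on has moved.
deps = {
    "billing-app": {"app-server", "oracle-db"},
    "app-server":  {"oracle-db"},
    "reporting":   {"billing-app"},
    "oracle-db":   set(),
}

# static_order() yields components with dependencies first; a circular
# dependency (a documentation error) raises CycleError instead of
# silently producing an impossible plan.
order = list(TopologicalSorter(deps).static_order())
print(order)
```

Here the database migrates first and the reporting system last; any cycle in the documented dependencies is flagged before the plan is committed to.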
Any company undertaking a migration needs the proper tools to document the current architecture, design the new one, establish and control the migration process, and facilitate communication among all involved. The time to do that is as early in the project as possible.
Linux is certainly capable of supporting mission-critical processes for global corporations. But a properly designed Linux strategy must be in place before migration. A common mistake is to translate old proprietary systems directly into a Linux setting. That approach just reproduces the old problems, albeit on more capable and less expensive hardware, and reduces the manageability advantage a migration is intended to create. Companies must develop a standard Linux image for their new servers so that the administrative processes and tools work with all of the new system configurations.
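The standard-image discipline can be enforced with a simple drift check that compares each server's package inventory against the golden image's manifest. The sketch below assumes manifests exported as package-to-version mappings; the package names and versions are hypothetical.

```python
# Hypothetical golden-image manifest: package -> required version,
# as might be exported from rpm or dpkg on the reference build.
golden = {"kernel": "2.6.32", "glibc": "2.12", "openssh": "5.3"}

def drift(server_manifest: dict) -> dict:
    """Return packages that are missing or differ from the golden image,
    mapped to (expected_version, found_version)."""
    return {
        pkg: (ver, server_manifest.get(pkg))
        for pkg, ver in golden.items()
        if server_manifest.get(pkg) != ver
    }

# A server that slipped out of standard: glibc is one version behind.
server = {"kernel": "2.6.32", "glibc": "2.11", "openssh": "5.3"}
print(drift(server))  # {'glibc': ('2.12', '2.11')}
```

An empty result means the server matches the standard image; anything else is a configuration that the common administrative tooling cannot be assumed to handle.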
Incomplete Migration
Much of the ROI in a migration comes from actually shifting to the new servers and shutting down the old ones, along with their higher energy and maintenance costs. To the degree that migration is slowed or left incomplete, a company may find that the cost benefits it sought disappear.
Operational Readiness
Just as a migration plan must identify hidden application-server dependencies and plan accordingly, it must also anticipate the impact on operations and software assurance processes. The IT department will need to make appropriate modifications to ensure that ordinary work continues as needed.
No matter how thorough the pre-migration analysis and planning, there is always the possibility that something unanticipated causes problems and reduces benefits. Uncovering such issues requires post-migration attention. At the very least, a company should undertake a review six months after the official completion of the migration.
shown that open system deployment costs are actually less than 75 percent of traditional methodologies' costs, including acquisition, transition, and servicing. Not only is this new model cheaper, but it also provides additional value through the ability to rapidly adapt infrastructure to address emerging innovations, markets, or technologies without the pain of migration, the risk to critical operations, and, ultimately, the high cost of change. A strategy based on open standards, virtualized environments, and common automation best positions an IT organization to direct more of its budget into innovation, thereby providing a strategic competitive advantage for the parent organization. There are many good reasons for a company to switch from proprietary Unix to industry-standard Linux servers. Some areas are easier to migrate than others. However, barring unusual circumstances, a company can usually migrate its systems within a few weeks, with appropriate in-house expertise or outside consulting help.
Best Practices
1. Standardize, Simplify, and Automate
Unix-to-Linux migration is one aspect of a three-pronged data center transformation strategy: standardize, simplify, and automate. Standardizing the IT infrastructure on open systems makes it possible to develop a portfolio of hardware and software that provides implementation predictability. However, standardization alone isn't enough. An inefficiently designed and implemented architecture remains a problem, even if built with standardized technology choices. In its Q4 2009 global ERP consolidation survey, Forrester Research found that 12 percent of the companies interviewed had from five to nine global instances of their ERP packages. An additional 14 percent had 10 or more instances, and a fifth of respondents didn't know the number.8 Once an organization has standardized on technology, it can design the post-migration infrastructure with simplification in mind. Adding automation to the standardized and simplified environment makes system administration and management even more efficient.

2. Enlist Support from the Top
Migration is a difficult and complex undertaking that requires substantial resources, corporate focus, and interdepartmental cooperation. It only works if there is sufficient support from upper management. Someone must make the necessary
resources available and set a clear path for escalation of departmental conflicts.

3. Rationalize Workloads
Servers are the visible manifestation of a company's infrastructure; workloads are the virtual one. By simplifying workloads before migration, IT organizations can reduce what must be supported in the new infrastructure, which will, in turn, require fewer resources to run and administer.
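Rationalization starts with a workload inventory; grouping identical application/version pairs across hosts surfaces consolidation candidates before anything is migrated. A minimal sketch, with a hypothetical inventory (application names, versions, and host names are illustrative):

```python
from collections import defaultdict

# Hypothetical workload inventory: (application, version, host).
inventory = [
    ("websphere", "6.1", "unix-host-01"),
    ("websphere", "6.1", "unix-host-02"),
    ("websphere", "7.0", "unix-host-03"),
    ("oracle",    "10g", "unix-host-04"),
]

# Group identical workloads: any (app, version) running on more than
# one host is a candidate for consolidation onto fewer target servers.
groups = defaultdict(list)
for app, version, host in inventory:
    groups[(app, version)].append(host)

candidates = {k: v for k, v in groups.items() if len(v) > 1}
print(candidates)  # {('websphere', '6.1'): ['unix-host-01', 'unix-host-02']}
```

Every consolidated group is one less workload variant the post-migration environment must support.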
8. The State of ERP 2009: Market Forces Drive Specialization, Consolidation, and Innovation; Hamerman, Paul D.; November 2009
4. Planning and Project Governance Are Key
Focusing on proper project control from the beginning is a key to success. The first step is to thoroughly plan the complex set of tasks and then, keeping in mind such issues as hidden dependencies, prioritize them. Once the plans are ready and the new infrastructure is designed, the migration to new systems should occur as quickly as possible. Remember: the longer the old systems remain up and running, the lower and slower the ROI. With the systems shifted to the new infrastructure, look for more potential consolidation beyond what was gained in early planning. Consider virtualizing the new infrastructure for even more efficient management and increased consolidation.

5. Use the Right Tools
Make identifying and implementing the right tools a priority early on. Identify the Unix applications that are candidates for migration and then create the necessary migration processes, templates, and methodologies for each.

6. Form Migration Teams
Managing the complexity of the migration project itself is essential. A good way to ensure that planning and implementation are well designed is to develop teams devoted to the major parts of the migration plan. Each team should involve all aspects of business and IT operations, so line-of-business and technical personnel can bring their insight into overall business needs to the process. Roles, responsibilities, and authority must be appropriately assigned, and teams must be able to handle, and be accountable for, the work.

7. Create a Communications Plan and Infrastructure
Because teams implement the migration plan,
communication is key to making it a success. But communication involves more than memos and the occasional meeting; tools that enable a constant flow of communication are required. Dell, for example, has used intranet portals to improve communication among all teams connected to a particular aspect of the migration. Any communications approach should include everyone involved in the migration. That way, teams stay up to date and can raise issues that might otherwise be overlooked.

8. Use Proofs of Concept and Pilots
Start with smaller, more manageable blocks of the migration to show that the process can work. This gives those in charge of the migration a chance to find problems early and develop solutions that will speed later work. Early success and savings build confidence among managers and make it easier to get the resources necessary to continue.