Wipro Case Study
Published by: rsrao70 on Jun 23, 2010
Copyright: Attribution Non-commercial
High Performance Computing at the Institute for Plasma Research – A Case Study
Customer Overview:
The Institute for Plasma Research can trace its roots back to the early 1970s, when a coherent and interactive programme of theoretical and experimental studies in plasma physics, with an orientation towards understanding space plasma phenomena, was established at the Physical Research Laboratory. IPR is today internationally recognized for its contributions to fundamental and applied research in plasma physics and associated technologies. It has a scientific and engineering staff of 200, with core competencies in theoretical plasma physics, computer modeling, superconducting magnets and cryogenics, ultra-high vacuum, pulsed power, microwave and RF, computer-based control and data acquisition, and industrial, environmental and strategic plasma applications.
The research areas can be broadly categorized into three activities:
• Studies on high-temperature, magnetically confined plasmas
• Basic experiments in plasma physics, including free electron lasers, dusty plasmas and other nonlinear phenomena
• Industrial plasma processing and applications
Current IT infrastructure:
The institute has a computational facility involving a wide network of roughly 500 machines, comprising workstations and PCs. A Fast Ethernet network of Layer 3 and Layer 2 switches, with a mix of copper and fibre connectivity, spans the institute. The institute is also part of an extensive grid – GARUDA, the national grid initiative from CDAC. The institute also has a 34-node Linux cluster on a Gigabit interconnect, based on single-core, single-socket compute nodes.
Some of the key hardware and software being used:
Hardware: Cray X1E, Sun Blade 2000, Sun UltraSPARC, HP and DEC Alpha workstations, Linux servers, Linux cluster.
Operating systems: UNICOS/mp, Sun Solaris, IBM AIX, HP-UX, several flavours of Linux, Digital Unix 4.0D and different versions of Windows.
Compilers & application software: C and Fortran compilers from various vendors, ANSYS, NAG, NCAR, MATLAB, IMSL, Serenade, Ansoft, Tornado, IDL, CICA, Visions, etc.
IT Requirement:
IPR wanted to deploy a 32-node cluster based on the Intel Core microarchitecture. The cluster was to use a low-latency, high-bandwidth 4x DDR InfiniBand interconnect with a bandwidth of 20 Gbps. The vendor was to deliver end-to-end cluster infrastructure based on two-CPU Intel servers and a 96-port InfiniBand switch. The cluster had to be scalable to a larger one in the future, and hence the switching infrastructure was to be deployed with greater expandability. Besides the hardware, the cluster provider also had to supply the necessary cluster software suite, comprising cluster managers, schedulers, debuggers, MPI implementations, and other libraries and cluster tools that could enhance performance and make the cluster easily manageable.
The applications primarily involved the following areas of work:
1. Computational Fluid Dynamics
2. Electrodynamics
3. Hydrodynamics
4. Differential Equations
5. Linear Matrices
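The 20 Gbps figure for 4x DDR InfiniBand is the raw signalling rate: four lanes at 5 Gbps each. Because this generation of InfiniBand uses 8b/10b line encoding, the usable data rate is lower. A minimal sketch of that arithmetic (the per-lane rate and encoding overhead are standard InfiniBand DDR figures, not stated in the case study):

```python
# Raw vs. usable bandwidth of a 4x DDR InfiniBand link.
# DDR signalling is 5 Gbps per lane; 8b/10b encoding carries
# 8 data bits for every 10 bits on the wire.
LANES = 4
SIGNAL_RATE_GBPS = 5.0        # per-lane DDR signalling rate
ENCODING_EFFICIENCY = 8 / 10  # 8b/10b line encoding

raw_gbps = LANES * SIGNAL_RATE_GBPS          # the quoted 20 Gbps
data_gbps = raw_gbps * ENCODING_EFFICIENCY   # usable payload bandwidth

print(f"raw: {raw_gbps} Gbps, usable: {data_gbps} Gbps")  # raw: 20.0 Gbps, usable: 16.0 Gbps
```

So the interconnect delivers roughly 16 Gbps of payload bandwidth per direction, still an order of magnitude beyond the Gigabit Ethernet it replaced.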
The challenges at the customer end were:
1. Some of the existing MPI codes, when clustered over Ethernet, hit a scalability ceiling at 16 CPUs. These needed to be scaled up using a low-latency, high-bandwidth interconnect, making InfiniBand a key choice.
2. Since the existing cluster was based on single-core, single-socket nodes, larger problem sizes and codes took longer to run; turnaround times (TAT) were large, hence the need for a faster cluster to reduce TAT.
3. Some of the applications are such that a linear speed-up from single-core, single-socket to dual-core, dual-socket nodes cannot be expected.
4. The customer targeted a cluster efficiency close to 70% of Rpeak (theoretical GFLOPS performance).
5. Some open-source cluster tools, such as Rocks and OSCAR, cannot be scaled to larger clusters without customization and tuning.
Besides these challenges on the application front, the key challenge was to win the deal against the likes of IBM, HP and Dell in an open tender situation.
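The 70%-of-Rpeak target can be made concrete for the delivered configuration of 32 nodes with two dual-core Xeon 5160 CPUs each. A back-of-the-envelope sketch, assuming the standard figures for this CPU (3.0 GHz clock, 4 double-precision FLOPs per core per cycle via SSE) — numbers not stated in the case study itself:

```python
# Theoretical peak (Rpeak) of the 32-node Xeon 5160 cluster,
# and the 70%-efficiency target set by the customer.
NODES = 32
SOCKETS_PER_NODE = 2   # two Xeon 5160 CPUs per node
CORES_PER_SOCKET = 2   # Xeon 5160 is dual-core
CLOCK_GHZ = 3.0        # Xeon 5160 clock speed (assumed)
FLOPS_PER_CYCLE = 4    # DP FLOPs/core/cycle with SSE (assumed)

rpeak_gflops = (NODES * SOCKETS_PER_NODE * CORES_PER_SOCKET
                * CLOCK_GHZ * FLOPS_PER_CYCLE)
target_gflops = 0.70 * rpeak_gflops

print(f"Rpeak = {rpeak_gflops:.0f} GFLOPS, 70% target = {target_gflops:.1f} GFLOPS")
```

Under those assumptions, Rpeak is 1536 GFLOPS, so the customer's target corresponds to a sustained (e.g. Linpack) performance of roughly 1075 GFLOPS.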
 The schematic of the solution provided is given below:
Wipro delivered this solution in association with their exclusive cluster solutions partner Z Research (www.zresearch.com). The key components of the solution were:
1. Wipro NetPower servers
   a. Master node – 1 no.
   b. Compute nodes – 32 nodes, each with two Intel Xeon 5160 CPUs
2. Networked storage – served from the head node using GlusterFS
3. GlusterHPC suite from Z Research, with the following components: Cluster Provisioner, MVAPICH-gen2, MPICH2, OFED-1.1, OpenMPI, Torque, C3, SLURM, ConMan, PDSH, LAM, Cerebro, Ganglia, Genders, Autologin, GlusterFS/NUFA, GNU FreeIPMI
4. SilverStorm InfiniBand switch and HCAs (Host Channel Adapters), with built-in subnet manager and redundant controllers, cooling and power
5. OS: Scientific Linux 4.4, EM64T
Connectivity Diagram for 32 Node HPC Cluster
[Diagram: master node (Wipro NetPower Z2205) and 32 compute nodes (Wipro NetPower Z2105) connected through a SilverStorm InfiniBand cluster interconnect switch.]