Building a Home Virtualization Lab on a Budget

After starting a couple of projects at work that involved server consolidation through virtualization, I went on a quest to transform my home lab into a virtualization lab. Since I already had a Dell PowerEdge SC440 server as a domain controller, along with a mixture of Windows 7 and Windows XP Professional workstations, I decided to purchase a similar Dell server plus a Network Attached Storage (NAS) unit for central storage to simulate an iSCSI Storage Area Network (SAN). I bought the server a few weeks ago and upgraded the existing one with an Intel Xeon processor, a couple of Intel Gigabit ET Dual Port Server Adapters, and more memory; details on the specs below.

I still need to decide on a NAS unit. My existing Netgear ReadyNAS NV+ would not work, nor do I want it to: it holds data that is more important than a lab, and it does not allow me to use it as an iSCSI target. On the networking side, I've decided to use Cisco 3750G switches from my dismantled CCIE lab.

There are tons of deals out there on cheap hardware that you can use to build your own virtualization lab, test things out, and break things by making silly mistakes, so that you don't make the same mistakes in a production environment at your job or for a client. You CAN run hypervisors such as VMware and XenServer on almost any hardware, but my goal is to use the lab for my "home production" environment as well as to test new operating systems and applications, and as a way to help me train on those applications and OSes. Hence the need for a couple of decent servers that can each run at least 5 VMs and can be used to test business continuity features such as High Availability (HA) and Fault Tolerance (FT). Moreover, I am studying for my VCP certification exam, and hands-on time with vSphere 4.0 is extremely helpful; that's how I like to train myself.
I've finished reading Mastering VMware vSphere 4 by Scott Lowe, and building the lab at home, together with implementing virtualization at work on enterprise-class hardware such as Dell R710s and a good $20k+ SAN, should help build up that confidence and expertise. Anyway, more on the work project some other time; for now, more details on my home lab.

The Virtualization Lab Hardware
2 Dell PowerEdge SC440 Servers, with these specs:

Chassis: Tower
Installed Processor(s): One
Processor Type: Intel Xeon 3060 Dual-Core, 2.4 GHz, 4MB L2 Cache
Installed Memory: 4GB (one of the servers currently has 5GB)
Note: I plan on upgrading the RAM to 8GB soon. I know Dell's documentation lists 4GB as the maximum, but it is possible to run 8GB on these servers, an added bonus that works out really well for home use.
Memory Type: PC2-5300 667MHz
Memory Slots: Four
Hard Drive: 80GB SATA 7.2K
Max No. Hard Drives: Two
HDD Interface: Embedded SATA
RAID Controller: N/A
Remote Access Controller: N/A
Optical Drive: DVD/CDRW
Floppy Drive: 1.44MB
Network Interface: Single embedded Gigabit NIC + 2x Intel Gigabit ET Dual Port Server Adapters (82576 chipset)
Power Supply(s): One
USB Ports: 2 Front / 5 Back

Serial Ports: One (9-Pin)
Parallel Ports: N/A
Expansion Slots: PCI Express x8 (with x16 connector), PCI Express x4 (with x8 connector), PCI Express x1, Two PCI 32-bit/33MHz 5V
Graphics: Integrated VGA
Keyboard Port: USB
Mouse Port: USB
Compatible Operating System: See Dell Website
Operating System: None Included
Additional Software: None Included

Networking
Cisco 3750G and 3560 Switches
Cisco 2611XM router (soon to be upgraded to an ASA5505)
Intel Gigabit ET Dual Port Server Adapters

Here is a Visio diagram of the network layout:

Home Lab before the upgrade.

Storage
To be able to test the enterprise features of VMware or Citrix XenServer, it is imperative to have central storage for your virtual machines, so I am looking at a NAS that also has iSCSI target capability to simulate an iSCSI SAN. I am still working on this and need to decide which one to go with. The two that I am interested in are the Thecus N7700PRO and the QNAP TS-809 PRO. The N7700PRO has an additional PCI-x slot for a 10GbE NIC!

My existing Netgear ReadyNAS is almost out of space; it has 4x 750GB hard drives, about 2TB of space in RAID-5. I've tried using it as an NFS datastore on VMware, and the constant access to the VMs produces too much I/O on the hard drives; when it's sitting in the same room, you can literally hear the datastore being accessed. I've also decided to invest in a new NAS unit because, if anything does go wrong, the only thing I will lose is the test/lab data and not my important stuff on the ReadyNAS. To be on the safe side, better to plan for it now.

The major factor in all this is cost, as the NAS alone would run around $1100+, before adding the cost of hard drives. I like to buy NAS units in a diskless configuration and buy hard drives cheaply on an as-needed basis. I do not need to start with 5, 6 or 8 hard drives; 2 1TB or 2TB drives in RAID-1 should do the trick. That really drives down the cost of the NAS over time, as you are not buying 10TB of space that you will not use in the next 2-4 years, so why not buy hard drives when you need the space?

No doubt hard drive prices have come down over the past few years, but think about what hard drives used to cost, what they cost now, and what they will cost a year from now. Try it: put the best hard drives in your wishlist and buy them in a year or so, when they are at least 45% cheaper; you will save money and in the process thank virtualization for the advice.
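To put rough numbers on that buy-as-you-go approach, here is a quick back-of-the-envelope sketch. The $100 drive price is a made-up placeholder, and the 45% yearly price drop is the figure mentioned above:

```python
# Back-of-the-envelope comparison: filling an 8-bay NAS up front versus
# buying drives as needed. The $100 drive price is a made-up placeholder;
# the 45% annual price decline is the assumption from the text.
PRICE_NOW = 100.0   # assumed cost of one drive today, in dollars
ANNUAL_DROP = 0.45  # assumed yearly price decline

def price_in(years):
    """Price of the same drive after `years`, falling 45% per year."""
    return PRICE_NOW * (1 - ANNUAL_DROP) ** years

# Fill all 8 bays on day one:
upfront = 8 * PRICE_NOW

# Start with 2 drives in RAID-1, add 3 more each of the next two years:
staggered = 2 * PRICE_NOW + 3 * price_in(1) + 3 * price_in(2)

print(round(upfront, 2))    # 800.0
print(round(staggered, 2))  # 455.75
```

Same number of bays filled, a bit over half the money spent, and the later drives will likely be bigger too.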

Currently, I have one VMware ESXi host with 3 VMs on it, including the domain controller and a Windows Server 2008 R2 64-bit server running the vCenter database.

Other Thoughts and Ideas
The servers each have one 80GB SATA hard drive. I could install 4 hard drives in 2 RAID-1 groups and install Citrix XenServer on that; it would be nice to switch between the two dominant hypervisors on as little hardware as possible. Has anyone tried this? It should work, and it could turn up the notch as far as ROI on the hardware is concerned.

One of the things we will try to do here on virtualization is to allow users (professionals, geeks and newbies) to share ideas and their network setups with others, to build stronger, more stable virtualized networks. Do visit the Virtualization Forum and post your questions and answers. Registration is free.
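On the storage front, once an iSCSI-capable NAS is in place, attaching it to the ESXi host is straightforward. Below is a rough sketch using the classic ESX/ESXi 4.x command-line tools; the target IP (192.168.1.50) and adapter name (vmhba33) are placeholders and will differ on your system:

```shell
# Sketch only: wiring an ESX/ESXi 4.x host to an iSCSI target.
# IP address and vmhba name below are placeholders.

# Enable the software iSCSI initiator
esxcfg-swiscsi -e

# Point dynamic discovery at the NAS's iSCSI target portal
vmkiscsi-tool -D -a 192.168.1.50:3260 vmhba33

# Rescan the software iSCSI adapter so the new LUN shows up
esxcfg-swiscsi -s
```

After the rescan, the LUN can be formatted as a VMFS datastore from the vSphere Client; the same steps can also be done entirely from the GUI under Configuration > Storage Adapters.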