Hardcore best practices on virtual infrastructure design using VMware products, by VMguru.nl.
Some highlights:
vCenter
Physical or virtual
If your virtual infrastructure is well designed and fully redundant, there is no reason not to run vCenter on a virtual server. It is fully supported, and by running vCenter in a virtual machine you profit from all the benefits a virtual infrastructure can deliver. The only limitation is the number of ESX hosts you have to manage: in a large environment a physical vCenter server is recommended, and high availability can then be achieved with vCenter Server Heartbeat. Use the sizing below to determine whether to go virtual or physical.
Sizing
- less than 10 ESX hosts:
- virtual server;
- 1 vCPU;
- 3GB of memory;
- Windows 32 or 64 bit operating system.
- between 10 and 50 ESX hosts:
- virtual server;
- 2 vCPUs;
- 4GB of memory;
- Windows 32 or 64 bit operating system (64 bit preferred).
- between 50 and 200 ESX hosts:
- Physical or virtual server (virtual preferred);
- 4 vCPUs;
- 4GB of memory;
- Windows 32 or 64 bit operating system (64 bit preferred).
- more than 200 ESX hosts:
- Physical server;
- 4 vCPUs;
- 8GB of memory;
- Windows 64 bit operating system.
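The sizing tiers above can be sketched as a small lookup. This is purely illustrative; the function name and return structure are my own, the thresholds and values come straight from the list:

```python
def vcenter_sizing(esx_hosts):
    """Suggest a vCenter server configuration for a given number of
    ESX hosts, following the sizing tiers listed above."""
    if esx_hosts < 10:
        return {"form": "virtual", "vcpus": 1, "memory_gb": 3,
                "os": "Windows 32 or 64 bit"}
    if esx_hosts < 50:
        return {"form": "virtual", "vcpus": 2, "memory_gb": 4,
                "os": "Windows 32 or 64 bit (64 bit preferred)"}
    if esx_hosts < 200:
        return {"form": "physical or virtual (virtual preferred)",
                "vcpus": 4, "memory_gb": 4,
                "os": "Windows 32 or 64 bit (64 bit preferred)"}
    # more than 200 hosts: go physical
    return {"form": "physical", "vcpus": 4, "memory_gb": 8,
            "os": "Windows 64 bit"}

print(vcenter_sizing(120))
```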
Storage
Spindles and RAID levels
With regard to storage, spindles are key: more spindles equals more performance. The second factor dictating performance is the RAID level, which makes RAID levels very important when designing a virtual infrastructure. Configuring storage is a compromise between capacity, performance and availability, and these choices can make or break storage performance. Slower SATA disks in RAID10 can outperform faster SAS disks in RAID5, because RAID5 pays a higher write penalty. So the bottom line is: make sure your VMFS storage gets the best performance, and all other storage gets the performance, availability and capacity it needs. Know your I/O characteristics.
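The SATA-in-RAID10-versus-SAS-in-RAID5 claim can be checked with the standard write-penalty rule of thumb. The per-disk IOPS figures and the 70% write mix below are my own illustrative assumptions, not numbers from the article:

```python
def frontend_iops(disks, iops_per_disk, write_penalty, write_fraction):
    """Rule-of-thumb front-end IOPS for a RAID set.

    Each front-end write costs `write_penalty` back-end I/Os
    (RAID10: 2, RAID5: 4, RAID6: 6); each read costs 1.
    """
    raw = disks * iops_per_disk
    return raw / ((1 - write_fraction) + write_fraction * write_penalty)

# Assumed figures: 7.2k SATA ~80 IOPS/disk, 15k SAS ~180 IOPS/disk,
# a write-heavy workload at 70% writes.
sata_raid10 = frontend_iops(8, 80, 2, 0.7)   # 8 SATA spindles in RAID10
sas_raid5 = frontend_iops(6, 180, 4, 0.7)    # 6 SAS spindles in RAID5
print(round(sata_raid10), round(sas_raid5))  # → 376 348
```

Under these assumptions the slower SATA set wins on writes, which is exactly why knowing your I/O characteristics matters before picking a RAID level.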
Number of VMs/LUN
You’d be surprised how many virtual infrastructures I encounter with only one extremely BIG LUN containing all the virtual machines. Most of the time, with this configuration, the end users are not satisfied with the performance. When I talk to them and propose to chop up their big LUN into several smaller ones to improve performance, the reaction is usually one of disbelief. When I give them one smaller LUN and let them put a poorly performing virtual machine on it, the discussion is over nine out of ten times.
This is why the VMware best practices advise not to put more than 16 to 20 server VMs or 30 to 40 desktop VMs on a LUN. Personally I like to stick to the lower values, so a maximum of 16 server VMs per LUN.
LUN size
When limiting your design to 16 server VMs per LUN and obeying the other VMware best practices, like leaving space for snapshots and clones and keeping roughly 20% free space, the recommended LUN size is between 400 and 600 GB.
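A back-of-the-envelope calculation shows how those guidelines land in the 400-600 GB range. The average VM disk size and snapshot overhead below are assumptions of mine, chosen only to make the arithmetic concrete:

```python
def lun_size_gb(vms=16, avg_vm_gb=25, snapshot_overhead=0.15, free_space=0.20):
    """Rough LUN size: total VM disk space plus snapshot/clone headroom,
    sized so that ~20% of the LUN stays free.

    avg_vm_gb and snapshot_overhead are illustrative assumptions."""
    used = vms * avg_vm_gb * (1 + snapshot_overhead)
    return used / (1 - free_space)

print(round(lun_size_gb()))                 # 16 VMs at ~25 GB each → 575 GB
print(round(lun_size_gb(avg_vm_gb=20)))     # smaller VMs → 460 GB
```

Both results fall inside the recommended 400-600 GB window.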
VMDK or RDM
When designing a virtual infrastructure and determining LUN size, it’s a waste to fill a datastore with a single virtual machine. In almost every design I stick to the following personal best practice: for every virtual machine disk larger than 20 to 50 GB, use a Raw Device Mapping (RDM).
In the past there have been discussions claiming that RDMs perform better, but tests by VMware show that the performance difference is minimal and can be neglected.
Another reason to use RDMs over VMDK disks is the level of low-level disk access/control they offer, and the need for SAN-based features like snapshots, deduplication, etc. There are two compatibility modes: physical and virtual. The level of virtualization an application allows and the functional needs determine the compatibility mode; for instance, in physical compatibility mode it’s not possible to use VMware snapshots.
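The personal rule above reduces to a simple decision: RDM for large disks or disks that need SAN-side features, VMDK for everything else. A minimal sketch, with the function name and the 50 GB cut-off chosen by me from the 20-50 GB range in the text:

```python
def disk_placement(size_gb, needs_san_features=False, threshold_gb=50):
    """Illustrative placement rule from the text: disks above the size
    threshold, or disks needing SAN-based features (array snapshots,
    deduplication), go on an RDM; the rest stay as VMDKs on VMFS."""
    if needs_san_features or size_gb > threshold_gb:
        return "RDM"
    return "VMDK"

print(disk_placement(200))                          # → RDM
print(disk_placement(20))                           # → VMDK
print(disk_placement(10, needs_san_features=True))  # → RDM
```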
Source: http://www.vmguru.nl/wordpress/2009/11/virtual-infrastructure-best-practices/