Hey guys,
We are currently in the process of adding 2 new hosts (HP BL460c G8) to our existing cluster of 4 hosts (HP BL460c G7). The hosts have the following configuration:
- 12 cores (2x Intel Xeon 6 cores)
- 192GB of memory (the G8s have 256GB)
- 6 NICs (2 onboard + 4 on the mezzanine card)
- 2x 146GB disks mirrored for the OS
- 2 FC HBAs
I just upgraded vCenter, vSphere and Update Manager to the latest version (5.1 U1). We run on average 40 to 60 VMs per host and have never had an issue. CPU usage is always around 15-20% and memory around 50-60%. Exchange and SQL are on physical servers.
We use 4 NICs for VM traffic (4 VM port groups with 3 active NICs each, plus 2 VMkernel port groups with 1 active NIC + 1 standby) and 2 NICs for vMotion and FT.
I was recently asked to see whether we could remove 2 switches (Cisco 3020) from the enclosure and still have a viable environment with only 4 NICs. The network guys tell me that traffic on the 3020s really isn't that bad: we see occasional peaks, but average traffic over the past year has been fairly low. This is all with 1Gb NICs and switches.
I was thinking of the following configuration if we go down to 4 NICs:
- 2 active NICs for VM traffic (ideally one onboard, one on the mezzanine)
- 1 active NIC for vMotion/Mgmt traffic, with the other NIC (used for FT) in standby
- 1 active NIC for FT/Mgmt traffic, with the other NIC (used for vMotion) in standby
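For reference, a teaming layout like the one proposed could be expressed per host with esxcli. This is only a sketch: the vSwitch name, port group names, and vmnic numbers below are placeholders, so adjust them to your actual layout.

```
# Sketch only -- names and vmnic numbers are placeholders, run per host.
# VM traffic: two active uplinks, ideally one onboard + one mezzanine port.
esxcli network vswitch standard policy failover set --vswitch-name=vSwitch1 --active-uplinks=vmnic1,vmnic4

# vMotion/Mgmt port group: one active uplink, the FT uplink in standby.
esxcli network vswitch standard portgroup policy failover set --portgroup-name=vMotion --active-uplinks=vmnic2 --standby-uplinks=vmnic5

# FT/Mgmt port group: the reverse, so each uplink backs the other.
esxcli network vswitch standard portgroup policy failover set --portgroup-name=FT --active-uplinks=vmnic5 --standby-uplinks=vmnic2
```

Overriding the failover order at the port group level like this keeps both traffic types redundant while giving each one a dedicated active uplink in normal operation.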
Am I missing something? To be honest, I would rather keep the 6 switches/NICs, thinking we'll have the traffic for them in the future, but management here wants to know if it's feasible with only 4 for the time being.
Thanks in advance.
So, you have some requirements with constraints here. The questions are to get us thinking, not necessarily to have a definitive answer.
Management has asked if a (4) NIC setup is feasible. So, let's look at the traffic types, assuming that redundancy is a requirement.
Required Traffic Types: Management, vMotion, and VM networking.
Optional Traffic Types: Fault Tolerance (only if you actually use it).
Based on the limited information listed here, I agree with the proposed NIC layout if 4 NICs is feasible based on investigation. (Of course more NICs gives more flexibility. 10GbE gives you even more options.)
VMNIC numbers here are just placeholders.
Also, you had (4) hosts and you are adding (2) more. Your cluster has just grown by 50%. How often does this happen? Are you expecting even more workloads in the next 6 to 12 months? I would say that you could definitely make a strong case for Enterprise Plus licensing for distributed switching alone, not to mention the other features you'll receive.
I hope this helps.
Don't forget to mark this answer "correct" or "helpful" if you found it useful (you'll get points too).
Regards,
Harley Stagner
VCP3/4, VCAP-DCD4/5, VCDX3/4/5
Website: http://www.harleystagner.com
Twitter: hstagner
Anyone?! :smileyconfused:
4 NICs could be limiting... but of course it depends on your traffic.
IMHO, if you really use FT, 1 Gbps is a big limitation (you need 10 Gbps). If you don't use it, just share that NIC with the vMotion interface.
Also, if you have distributed virtual switches, consider enabling NIOC to handle priority and bandwidth.
We checked the traffic and it is considerably low, with a few peaks from time to time.
FT is currently configured on one NIC but we don't use it... so I will take your recommendation and wait until we get 10 Gbps.
We don't have a VDS but I'm pushing for it... It would simplify our virtual network quite a bit, especially with all the new hosts we are adding.
So, considering we have no traffic issues, a 4-NIC solution seems possible? Interesting...
Thanks a lot for your input!
Jon
If you want to use FT then you will need 6 uplinks. If you are happy to drop FT then you can use 4 uplinks. The configuration would be:
vSwitch0 - Management
Management - vmnic0 active / vmnic3 standby
vMotion - vmnic0 standby / vmnic3 active
vSwitch1 - VM Networking
VM - vmnic1 active / vmnic2 active
Where vmnic0,1 are using inbuilt NIC ports and vmnic2,3 are using mezz NIC ports.
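The layout above could be applied per host with esxcli along these lines. The port group names are assumptions (use whatever your hosts actually call the management and vMotion port groups):

```
# Sketch of the two-vSwitch layout above; port group names are assumed.
# vSwitch0 carries Management and vMotion on vmnic0 (onboard) + vmnic3 (mezz).
esxcli network vswitch standard policy failover set --vswitch-name=vSwitch0 --active-uplinks=vmnic0,vmnic3
esxcli network vswitch standard portgroup policy failover set --portgroup-name="Management Network" --active-uplinks=vmnic0 --standby-uplinks=vmnic3
esxcli network vswitch standard portgroup policy failover set --portgroup-name=vMotion --active-uplinks=vmnic3 --standby-uplinks=vmnic0

# vSwitch1 carries VM networking on vmnic1 (onboard) + vmnic2 (mezz), both active.
esxcli network vswitch standard policy failover set --vswitch-name=vSwitch1 --active-uplinks=vmnic1,vmnic2
```

Splitting onboard and mezzanine ports across each vSwitch means the loss of either adapter (or either 3020) leaves every traffic type with a surviving uplink.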
You could put all uplinks and networks on a single vSwitch but in my experience this makes maintenance harder and more complex. Also you need more complex active/standby NIC configurations.
Regards,
Paul
Remember also that with vSphere 5 you can use multiple NICs for vMotion.
It's not the same (yet) with FT... so in that case a dedicated, fast NIC is pretty much mandatory.
If you have vSphere Enterprise Plus, you could consider having two NICs and Network I/O Control to manage the traffic, but those really should be 10G NICs.
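Multi-NIC vMotion in vSphere 5.x just means creating two vMotion-enabled VMkernel ports and pinning each one to a different active uplink. A rough esxcli sketch, where the port group names, vmk numbers, IPs and vmnic numbers are all placeholders:

```
# Sketch only -- names, vmk numbers, IPs and vmnic numbers are placeholders.
# Two vMotion port groups on the same vSwitch:
esxcli network vswitch standard portgroup add --portgroup-name=vMotion-1 --vswitch-name=vSwitch0
esxcli network vswitch standard portgroup add --portgroup-name=vMotion-2 --vswitch-name=vSwitch0

# One VMkernel interface per port group, on the same vMotion subnet:
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=vMotion-1
esxcli network ip interface add --interface-name=vmk2 --portgroup-name=vMotion-2
esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=192.168.10.11 --netmask=255.255.255.0 --type=static
esxcli network ip interface ipv4 set --interface-name=vmk2 --ipv4=192.168.10.12 --netmask=255.255.255.0 --type=static

# Pin each port group to a different active uplink, with the other in standby:
esxcli network vswitch standard portgroup policy failover set --portgroup-name=vMotion-1 --active-uplinks=vmnic0 --standby-uplinks=vmnic3
esxcli network vswitch standard portgroup policy failover set --portgroup-name=vMotion-2 --active-uplinks=vmnic3 --standby-uplinks=vmnic0
```

vMotion then still has to be enabled on both vmk interfaces (in the vSphere Client, or via vim-cmd on the host) before migrations will spread across both uplinks.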
Regarding FT - it probably is a good idea to have dedicated NICs here, especially if the apps inside the VM are busy. However, a dedicated FT-related network infrastructure can have a significant impact on cost, so you may want to have a separate charge back mechanism and a respective SLA which truly reflects the power of FT (always on, like a hardware cluster). FT makes sense for applications which are truly mission critical and when the VM only needs 1 vCPU (as of vSphere 5.1). Examples could be home grown applications or any other apps which run on 1 vCPU and where HA is not good enough to provide sufficient uptime.
My 2c.