Hello,
I am looking for a networking best practice for using four 1Gb NICs with vSphere 5. I know there are a lot of best practices for 10Gb, but our current hardware only supports 1Gb. I need to include Management, vMotion, Virtual Machine (VM), and iSCSI traffic. If there are any others you recommend, please let me know.
I found a diagram that looks like what I need, but it is for 10Gb. I'm thinking this will work.
(I got this diagram HERE - Rights go to Paul Kelly)
My next question is: how much of the traffic load does each traffic type put on the network, percentage-wise?
For instance, Management traffic is very small; about the only time it is used heavily is during an agent installation, and then it might use 70%.
I need the bandwidth percentages, if possible.
If anyone out there can help me out, that would be so great.
Thanks!
-Erich
Without knowing your environment it would be impossible to give you an idea of the bandwidth usage.
Having said that, if you had about 10 to 15 VMs per host with that configuration, you should be fine.
Sent from my iPhone
With only four ports and the need for iSCSI it will be a bit tight. Certainly possible, but less clean, and performance will be somewhat unpredictable.
Do you have any possibility to add one more physical NIC with at least two Ethernet ports? The configuration would be much easier with those extra ports.
iSCSI will always go on dedicated ports, and you will want two of them.
As stated above, ideally you'll have a few more to work with, but in the absence of that, the choice is simple - 2 x NICs for management, vMotion and VM traffic.
Edit: I wouldn't agree with that diagram either. There's no reason to configure explicit failover mode and have one port in standby (iSCSI is a different argument), when you could be teaming. Especially with something as high end as a Nexus on the other end.
I wouldn't recommend this configuration, but if you cannot get more than 4 NICs, the only way I can think to do this would be:
vSwitch0
Management - vmnic0 active / vmnic2 standby
VM Network - vmnic0 active / vmnic2 standby
vMotion - vmnic0 standby / vmnic2 active
vSwitch1
iSCSI1 - vmnic1 active / vmnic3 unused
iSCSI2 - vmnic1 unused / vmnic3 active
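If it helps, the layout above can be scripted from the ESXi shell with esxcli. This is only a sketch: it assumes ESXi 5.1+ esxcli namespaces, the vSwitch/port group/vmk names match the list above, and `vmhba33` is a placeholder for your software iSCSI adapter (check yours with `esxcli iscsi adapter list`).

```shell
# vSwitch0: Management / VM Network / vMotion on vmnic0 + vmnic2
esxcli network vswitch standard uplink add -v vSwitch0 -u vmnic0
esxcli network vswitch standard uplink add -v vSwitch0 -u vmnic2
esxcli network vswitch standard portgroup add -v vSwitch0 -p "VM Network"
esxcli network vswitch standard portgroup add -v vSwitch0 -p vMotion
esxcli network vswitch standard portgroup policy failover set \
    -p "Management Network" --active-uplinks vmnic0 --standby-uplinks vmnic2
esxcli network vswitch standard portgroup policy failover set \
    -p "VM Network" --active-uplinks vmnic0 --standby-uplinks vmnic2
esxcli network vswitch standard portgroup policy failover set \
    -p vMotion --active-uplinks vmnic2 --standby-uplinks vmnic0

# vSwitch1: iSCSI on vmnic1 + vmnic3, one active uplink per port group.
# An uplink listed in neither --active-uplinks nor --standby-uplinks
# becomes unused, which is what iSCSI port binding requires.
esxcli network vswitch standard add -v vSwitch1
esxcli network vswitch standard uplink add -v vSwitch1 -u vmnic1
esxcli network vswitch standard uplink add -v vSwitch1 -u vmnic3
esxcli network vswitch standard portgroup add -v vSwitch1 -p iSCSI1
esxcli network vswitch standard portgroup add -v vSwitch1 -p iSCSI2
esxcli network vswitch standard portgroup policy failover set \
    -p iSCSI1 --active-uplinks vmnic1
esxcli network vswitch standard portgroup policy failover set \
    -p iSCSI2 --active-uplinks vmnic3

# vmkernel ports for iSCSI, then bind them to the software iSCSI adapter
esxcli network ip interface add -i vmk1 -p iSCSI1
esxcli network ip interface add -i vmk2 -p iSCSI2
esxcli iscsi networkportal add -A vmhba33 -n vmk1
esxcli iscsi networkportal add -A vmhba33 -n vmk2
```

You'd still need to assign IP addresses to vmk1/vmk2 and enable vMotion on its vmkernel port; this just shows the uplink pinning part of the design.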
@Josh
Pinning the traffic to specific uplinks is the only way to absolutely guarantee that, under normal working conditions, no type of traffic will impact any other type of traffic. This diagram was made for those IT workers who have high-end servers and 10GbE networking but cannot get management to buy Ent+ licensing, and so have no other way of limiting traffic types on their network. This is actually a very common occurrence, and the design was frequently requested, which is why it was created. The best option is of course to have Ent+ licenses and use all links for all traffic types in conjunction with SIOC, NIOC and LBT.
Regards,
Paul
Hi guys,
Thanks for all the help. This is very useful information. Thank you Paul for your diagrams. For now, the only card we have is the 4-port gigabit. This is all on our Dev environment, so when we transfer over, we may have a server with 6+ network ports, but that's not guaranteed. We are trying to see if we can get everything to work with this setup (4-port 1Gb NIC).
Paul,
How much bandwidth will Management, VM Network, vMotion, and iSCSI use? Is it possible to get a rough estimate of the percentage of bandwidth? I know what I am dealing with is pretty tight.
Thanks again for the assistance,
-Erich
If you don't want to open that, here's a PNG of the diagram.
You left out vMotion in that diagram.
Putting vMotion on the same vmnics as Management seems to be the most reasonable here.
Depending on the number of VMs and their need for network bandwidth you might be forced to let them use the same vmnic as Management/vMotion too.
Ok, I added vMotion. However, I don't think we'll be using it. That's why I left it out.
Rev. 2
Yet another question..
Is it necessary to have 4 uplinks for the 2 switches? If so, why is that?
Do the switches have to be managed? The switches I have are unmanaged.
Is it necessary to set up anything in the switches for VMware VLAN tagging?
Thanks,
Erich
Norgenator wrote:
Ok, I added vMotion. However, I don't think we'll be using it.
If you have the license, why not configure it? It could be something that you use once every month, but would still be a great feature and value-add for most environments.
You're right. I guess you never know when you may need it. I'll have the option available. Thanks.
Erich wrote:
Is it necessary to set up anything in the switches for VMWare VLAN Tagging?
Thanks,
Erich
If you are doing VLAN tagging, and the switch doesn't support those VLANs, what is the switch doing with those tags?
You really have no way of using VLANs effectively if you don't have a managed switch.
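For reference, if you do end up with a managed switch later, VST (virtual switch tagging) only needs two pieces: a VLAN ID on the ESXi port group, and a trunk on the physical switch port. A sketch, where the port group name, VLAN IDs and the switch interface are placeholders for your environment:

```shell
# ESXi side (VST mode): tag the port group; frames leave the host
# already tagged, so the physical switch port must be a trunk that
# carries this VLAN
esxcli network vswitch standard portgroup set -p "VM Network" --vlan-id 20

# Physical switch side - Cisco IOS-style syntax shown as comments,
# since this is switch config, not ESXi shell:
#   interface GigabitEthernet0/1
#    switchport mode trunk
#    switchport trunk allowed vlan 10,20,30
```

With an unmanaged switch there is no trunk configuration, so tagged frames will typically just be dropped, which is why VLANs effectively require a managed switch.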
Thanks everybody for all the help. All this really helped a lot!
-Erich
With only one server, why bother with vMotion?