I read that there are issues with this and I'm fairly positive I read that it's still not supported in one of the ESX4u1 documents (though for the life of me I can't find that blurb this morning). Is this still an issue? And if so, can anyone explain why exactly this does not work?
I'm documenting a proposed configuration for a deployment using dual 10GbE cards, and I'm researching the combination of iSCSI multipathing and dvSwitches. That raises another question: is multipathing even necessary over a 10GbE pipe?
-Justin
If you are speaking of using port-binding, then you have to use legacy switches for that purpose. It is in the vSphere release notes
vExpert
VMware Communities Moderator
-KjB
Actually, with the new Network I/O Control features in 4.1, I would almost say it's encouraged.
And multipathing is necessary, if not for bandwidth then for redundancy.
vExpert
VMware Communities Moderator
-KjB
That's what I was looking for, thanks! vmknic port binding is required to configure multiple paths to iSCSI storage, so that explains it.
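For anyone else landing on this thread, the binding step on ESX/ESXi 4.x is done with `esxcli swiscsi`. This is just a sketch: the vmkernel port names (vmk1, vmk2) and the software iSCSI adapter name (vmhba33) are placeholders, and it assumes each vmknic has already been pinned to a single active uplink on its portgroup.

```shell
# Bind each vmkernel port to the software iSCSI adapter
# (vmk1/vmk2 and vmhba33 are placeholders -- substitute your own)
esxcli swiscsi nic add -n vmk1 -d vmhba33
esxcli swiscsi nic add -n vmk2 -d vmhba33

# Verify that both vmknics are now bound to the adapter
esxcli swiscsi nic list -d vmhba33
```

After the bindings are in place, a rescan of the iSCSI adapter should show multiple paths per LUN.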
How does multipathing provide redundancy that a single vswitch with multiple uplinks does not?
-Justin
You wouldn't use the port-binding approach for redundancy's sake alone; as you say, a vSwitch with multiple uplinks will accomplish that. You would use port binding to add capacity/throughput, above and beyond the typical redundant-connection scenario.
vExpert
VMware Communities Moderator
-KjB