Hello, I have an Exchange server that we are about to virtualize.
In the physical world we have a pair of NICs pointed at our iSCSI network/SAN, using the Microsoft software iSCSI initiator so the server can reach the SAN that hosts the databases and transaction log files.
When I virtualize this I would like to temporarily keep that same setup.
In the guest OS we would have one vNIC for LAN traffic and another for the Microsoft software initiator pointing at the SAN for the database and logs (MS initiator inside the VM).
Since I am low on ESX host NICs, can I reuse the existing pNICs and vSwitch that are dedicated to our ESX host's software iSCSI/VMkernel connection to the SAN, and add a VM port group to that vSwitch?
Then link that guest OS vNIC to that iSCSI VMkernel vSwitch for the MS software iSCSI.
I am concerned about network contention from running the guest VM's iSCSI traffic over the same NICs that the ESX host is using for its own iSCSI traffic.
This would only be for a small transition period, a week or so, until all the Exchange data gets into VMFS.
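To make sure I'm describing the layout clearly, here is a minimal Python model of the proposed setup (vSwitch, uplink, and port-group names are all placeholders I made up, not your actual config): one vSwitch whose two uplinks carry both the host's software-iSCSI VMkernel port group and a new VM port group for the guest's MS initiator.

```python
# Toy model of the proposed shared-vSwitch layout. All names are
# hypothetical; the point is that both port groups ride the same uplinks.

class PortGroup:
    def __init__(self, name, active):
        self.name = name
        self.active = list(active)  # preferred uplinks, in order

class VSwitch:
    def __init__(self, name, uplinks):
        self.name = name
        self.uplinks = list(uplinks)
        self.port_groups = {}

    def add_port_group(self, pg):
        self.port_groups[pg.name] = pg

vswitch1 = VSwitch("vSwitch1", uplinks=["vmnic2", "vmnic3"])
vswitch1.add_port_group(PortGroup("iSCSI-VMkernel", active=["vmnic2", "vmnic3"]))
vswitch1.add_port_group(PortGroup("iSCSI-VM", active=["vmnic3", "vmnic2"]))

# Both port groups share the same physical uplinks:
shared = set(vswitch1.port_groups["iSCSI-VMkernel"].active) & \
         set(vswitch1.port_groups["iSCSI-VM"].active)
print(sorted(shared))  # ['vmnic2', 'vmnic3']
```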
thanks
If you find this or any post helpful please award points
If your iSCSI network is routed, then yes, you can use your existing pNICs in the ESX host to let the guest's iSCSI traffic pass through. If the iSCSI traffic is not routed, then you'll need a pNIC on the segment that your iSCSI lives on.
The other strategy with the port group and vNICs will work as well, provided again that your pNICs are on the correct segment and/or can route from those pNICs out to the iSCSI network.
As far as contention goes, you have to drive a lot of traffic to fill 1 GbE links. Not saying that can't happen, but it's not as simple as one might think. Remember, your access/distribution/core switches and routers are most likely interconnected with those same 1 GbE links, and they support your entire network. Still, you have a lot of OSes driving traffic, so keep an eye on VirtualCenter and see how much traffic you are pushing. To stay safe, also increase the service console memory to its 800 MB maximum, if you have the memory to spare.
-KjB
I just had a thought. Isn't it true that ESX can only use one pNIC for the host's software iSCSI traffic, no matter what load balancing policy you use or how many NICs you bind to the vSwitch that the VMkernel port group is part of? With that in mind, I could bind two physical NICs to that vSwitch.
On the ESX VMkernel iSCSI port group I could have pNIC1 active and pNIC2 in standby.
On the VM port group for the OS iSCSI network I could have pNIC2 active and pNIC1 in standby.
That way, under normal circumstances, no single NIC ever has to do both jobs at the same time (push ESX host iSCSI traffic and OS iSCSI traffic). As a bonus, I'm also improving fault tolerance for both networks.
Does that sound correct?
Yes, and no. It's true that you will only use one pNIC, but that's if you're connected to one target. If you have multiple targets in use over iSCSI, your connections to those targets will be spread over the available pNICs.
Yes, you can bind several pNICs to a vSwitch, and if you leave the policy at route-based-on-source-port, you can leave both as active. What I've typically done in low-pNIC scenarios is bind two pNICs into a vSwitch and use that vSwitch for management traffic: my service console port group has NicA and NicB active in that order, and my vMotion traffic has NicB and NicA active in that order. And you are correct, your method will work too, keeping traffic on one interface until a failure condition occurs.
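The "reversed active order" teaming idea above can be sketched in a few lines of Python (NIC names are hypothetical): each port group walks its own preference list and uses the first healthy uplink, so under normal conditions the two workloads land on different pNICs, and both fall back to the survivor when one fails.

```python
# Sketch of per-port-group failover order. Each port group prefers a
# different pNIC, so traffic splits until a failure forces a fallback.

def pick_uplink(preference, healthy):
    """Return the first uplink in this port group's order that is up."""
    for nic in preference:
        if nic in healthy:
            return nic
    return None  # all uplinks down

sc_order      = ["NicA", "NicB"]  # e.g. service console / host iSCSI
vmotion_order = ["NicB", "NicA"]  # e.g. vMotion / guest iSCSI

healthy = {"NicA", "NicB"}
print(pick_uplink(sc_order, healthy), pick_uplink(vmotion_order, healthy))
# NicA NicB  -> normal case: traffic split across both pNICs

healthy = {"NicB"}  # NicA fails
print(pick_uplink(sc_order, healthy), pick_uplink(vmotion_order, healthy))
# NicB NicB  -> both workloads fall back to the surviving pNIC
```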
-KjB
Hello,
However, be aware that running VM-based iSCSI initiators over the same iSCSI channels used for VMFS could lead to interesting security issues. The best practice is to isolate VM traffic from host traffic so that there is no chance a VM could access the VMFS. This would change your virtual networking somewhat, but it can easily be solved by using VLANs.
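The VLAN isolation point can be illustrated with a toy model (the VLAN IDs and port-group names here are examples I picked, not a recommendation): frames are only delivered between ports tagged with the same VLAN ID, so a guest on the VM iSCSI VLAN cannot see the host's VMkernel/VMFS traffic.

```python
# Toy model of VLAN isolation on a shared vSwitch. IDs are examples only.

port_group_vlans = {
    "iSCSI-VMkernel": 10,  # host software iSCSI / VMFS traffic
    "iSCSI-VM": 20,        # guest MS-initiator traffic
}

def can_receive(src_pg, dst_pg, vlans):
    """A frame from src_pg is visible to dst_pg only on the same VLAN."""
    return vlans[src_pg] == vlans[dst_pg]

print(can_receive("iSCSI-VMkernel", "iSCSI-VM", port_group_vlans))  # False
```

Note the physical switch ports carrying the uplinks would also need to trunk both VLANs for this to work end to end.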
Best regards,
Edward L. Haletky
VMware Communities User Moderator
====
Author of the book 'VMware ESX Server in the Enterprise: Planning and Securing Virtualization Servers', Copyright 2008 Pearson Education.
CIO Virtualization Blog: http://www.cio.com/blog/index/topic/168354
As well as the Virtualization Wiki at http://www.astroarch.com/wiki/index.php/Virtualization
So the source port load balancing policy, in regards to ESX software iSCSI connections to LUNs, works the same way that source port load balancing works with vSwitch ports and NICs?
So, for example, if I had 6 LUNs that ESX points to and 2 NICs, it would go like this?
ESX host iSCSI traffic to LUN1 would use pNIC1
ESX host iSCSI traffic to LUN2 would use pNIC2
ESX host iSCSI traffic to LUN3 would use pNIC1
ESX host iSCSI traffic to LUN4 would use pNIC2, and so on, until a NIC fails?
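The alternating pattern described above can be sketched as a simple round-robin placement of per-LUN sessions onto uplinks (this is a simplified stand-in for the real policy, and the LUN/pNIC names are hypothetical):

```python
# Sketch: spread per-LUN iSCSI sessions over available uplinks round-robin.
# A simplified model of the alternating pattern, not ESX's actual algorithm.

def assign_connections(luns, pnics):
    """Map each LUN's session to an uplink, cycling through the pNIC list."""
    return {lun: pnics[i % len(pnics)] for i, lun in enumerate(luns)}

luns = [f"LUN{n}" for n in range(1, 7)]
mapping = assign_connections(luns, ["pNIC1", "pNIC2"])
print(mapping)
# {'LUN1': 'pNIC1', 'LUN2': 'pNIC2', 'LUN3': 'pNIC1',
#  'LUN4': 'pNIC2', 'LUN5': 'pNIC1', 'LUN6': 'pNIC2'}
```

On a failure, the surviving pNIC would simply take all the sessions, which is what the failover discussion earlier in the thread describes.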