I have blades with 6 NICs and a NetApp SAN.
There is probably a requirement for VMs to have direct iSCSI access to storage, plus a VCB proxy using HP Data Protector for backup.
I'd like to use NFS if suitable, given the ability to access backup snapshots easily rather than restoring from tape or restoring LUNs.
So I'm thinking:
1 NIC Service Console
1 NIC VMotion
2 NICs teamed for VM network access
2 NICs for NFS/iSCSI
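For reference, that layout could be sketched from the service console with `esxcfg-vswitch` (vSwitch names, portgroup names and vmnic ordering here are assumptions; adjust to your host):

```shell
# vSwitch0: Service Console on one NIC (names/vmnic order assumed)
esxcfg-vswitch -a vSwitch0
esxcfg-vswitch -L vmnic0 vSwitch0
esxcfg-vswitch -A "Service Console" vSwitch0

# vSwitch1: VMotion on one NIC, with a VMkernel interface
esxcfg-vswitch -a vSwitch1
esxcfg-vswitch -L vmnic1 vSwitch1
esxcfg-vswitch -A "VMotion" vSwitch1
esxcfg-vmknic -a -i 10.0.1.10 -n 255.255.255.0 "VMotion"

# vSwitch2: two teamed NICs for VM network access
esxcfg-vswitch -a vSwitch2
esxcfg-vswitch -L vmnic2 vSwitch2
esxcfg-vswitch -L vmnic3 vSwitch2
esxcfg-vswitch -A "VM Network" vSwitch2

# vSwitch3: two NICs for NFS/iSCSI, with a storage VMkernel interface
esxcfg-vswitch -a vSwitch3
esxcfg-vswitch -L vmnic4 vSwitch3
esxcfg-vswitch -L vmnic5 vSwitch3
esxcfg-vswitch -A "VMkernel Storage" vSwitch3
esxcfg-vmknic -a -i 10.0.2.10 -n 255.255.255.0 "VMkernel Storage"
```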
Can iSCSI from VMs run over the same vswitch and physical NICs as ESX access to storage using NFS?
Does this sound OK or would going with iSCSI datastores be a better option?
Hey,
I think the above comments have covered the first vswitch.
Now with the others...
iSCSI traffic from a VM must go over a Virtual Machine port group. So if you install, e.g., the Microsoft iSCSI initiator in a guest, that traffic will go over a VM port group.
iSCSI traffic from ESX goes over the VMkernel/Service Console port groups. If you're using the iSCSI software initiator, don't forget you will need SC access to the iSCSI target.
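If you go the software-initiator route, the service-console side can be prepared like this (a sketch; service names as in ESX 3.x):

```shell
# Open the SC firewall for the software iSCSI initiator
esxcfg-firewall -e swISCSIClient

# Enable the software iSCSI initiator itself
esxcfg-swiscsi -e
```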
If you found this or any other answer useful please consider the use of the Helpful or correct buttons to award points
~y
>1 NIC Service Console
>1 NIC Vmotion
Use instead 1 vSwitch with 2 NICs, and change the NIC order at the portgroup level.
>Can iSCSI from VMs run over the same vswitch and physical NICs as ESX access to storage using NFS?
Yes, but if you have more NICs, it could be better to have different vSwitches.
Andre
**if you found this or any other answer useful please consider allocating points for helpful or correct answers
Yes, of course it can... but there's a performance penalty if and when you've got everything running on those two NICs. I can imagine you've got a constraint in terms of max NIC ports; given the max of 6, this would be the best solution indeed. Keep in mind that you then have a standby NIC for both the SC and VMotion. (In other words, create a single vSwitch for SC + VMotion and create an active/standby situation.)
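For what it's worth, that combined SC + VMotion vSwitch could look something like this (names and vmnic assignments are assumptions; the per-portgroup active/standby failover order itself is set in the VI Client, not on the command line):

```shell
# One vSwitch, two uplinks, two portgroups
esxcfg-vswitch -a vSwitch0
esxcfg-vswitch -L vmnic0 vSwitch0
esxcfg-vswitch -L vmnic1 vSwitch0
esxcfg-vswitch -A "Service Console" vSwitch0
esxcfg-vswitch -A "VMotion" vSwitch0

# Then in the VI Client, per portgroup (NIC Teaming > Failover Order):
#   Service Console: vmnic0 active, vmnic1 standby
#   VMotion:         vmnic1 active, vmnic0 standby
```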
Duncan
VCDX | VMware Communities User Moderator
-
If you find this information useful, please award points for "correct" or "helpful".
So if I use 2 NICs for the SC and VMotion, using preferred NICs so each has a standby, aren't there VLAN issues there?
i.e. wouldn't the SC and VMotion have to be on the same VLAN?
If you want, you can use VLANs, but it's not necessary. Two different logical networks can exist over the same vSwitch.
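If you do want to separate them with VLANs, tagging is applied per portgroup (the VLAN IDs below are made-up examples):

```shell
# Tag each portgroup on the shared vSwitch with its own VLAN
esxcfg-vswitch -v 10 -p "Service Console" vSwitch0
esxcfg-vswitch -v 20 -p "VMotion" vSwitch0

# Verify: the listing shows a VLAN column per portgroup
esxcfg-vswitch -l
```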
Andre
**if you found this or any other answer useful please consider allocating points for helpful or correct answers
So I just add a Virtual machine port group to the vswitch with NFS and then add vNICs in that port group for VMs that require iSCSI access?
That is one option, or you could use a different NIC, or use VLAN tagging, etc...
Why do your VMs need iSCSI access? Can they not use a traditional VMDK?
If you found this or any other answer useful please consider the use of the Helpful or correct buttons to award points
~y
They may want to use NetApp SnapManager products, which require either FC or iSCSI LUNs within the VMs for backups, i.e. Exchange, SQL, Oracle.
>So I just add a Virtual machine port group to the vswitch with NFS and then add vNICs in that port group for VMs that require iSCSI access?
This is exactly what I do, it works fine.
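For completeness, adding that VM port group to the storage vSwitch might be sketched like this (portgroup name, vSwitch name and VLAN ID are assumptions):

```shell
# Add a VM port group for guest iSCSI initiators on the storage vSwitch
esxcfg-vswitch -A "VM-iSCSI" vSwitch3

# Optionally tag it so guest iSCSI rides its own VLAN
esxcfg-vswitch -v 30 -p "VM-iSCSI" vSwitch3
```

Then attach a vNIC from each VM that needs iSCSI to that port group and configure the guest's initiator against the NetApp target.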