If setting up guest operating systems with internal iSCSI initiators, how do you deal with access to the iSCSI network?
It seems to make sense to have a separate iSCSI storage network which the guests cannot normally reach. But if some of the guests need to run an internal iSCSI initiator to access a LUN on the same SAN as the VMFS datastores, how should this be configured?
I am thinking about adding a second virtual NIC to the guests and putting that vNIC on the Storage Network port group with the correct VLAN, as that would at least restrict access to the iSCSI network to a smaller number of guests. Would that be a possible solution?
How would you protect the VMFS iSCSI LUNs from being directly accessed, and possibly destroyed, by the guests? CHAP or some SAN security feature?
I am thinking about adding a second virtual NIC to the guests and putting that vNIC on the Storage Network port group with the correct VLAN, as that would at least restrict access to the iSCSI network to a smaller number of guests. Would that be a possible solution?
Yes, another vNIC is actually the only way to connect to the storage.
How would you protect the VMFS iSCSI LUNs from being directly accessed, and possibly destroyed, by the guests? CHAP or some SAN security feature?
You would do this on the storage side. Depending on the storage, create hosts or host groups and present the different LUNs only to the hosts which should see them.
André
Thank you for your reply.
Yes, another vNIC is actually the only way to connect to the storage.
Well, in theory you could place the guests and the storage on the same L2 network, or perhaps route between the guest and storage VLANs, but that would of course not be a good idea. :) But good to know that a second vNIC is the preferred way.
You would do this on the storage side. Depending on the storage, create hosts or host groups and present the different LUNs only to the hosts which should see them.
That would be based on IP addresses then? Is that commonly supported on SANs with iSCSI support?
ricnob,
Yes, you can use the same iSCSI network, but you will need to create a separate port group.
Give this a read, as it has helped others:
http://communities.vmware.com/thread/194113
Regards,
Chad King
VCP
"If you find this post helpful in any way, please award points as necessary"
ricnob,
Yes, you can use the same iSCSI network, but you will need to create a separate port group.
Yes, there would have to be a virtual machine port group configured with the correct storage network VLAN ID, most likely on the same vSwitch as the iSCSI VMkernel port, so all iSCSI traffic shares the same pNIC(s).
Sorry, I missed the necessary VM network port group on the vSwitch in my first post.
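If it helps, the port group setup can be sketched from the ESX service console. This is only a sketch: the vSwitch name "vSwitch1", the port group name "Storage VMs", and VLAN ID 100 are placeholders for your environment.

```shell
# Assumes vSwitch1 already carries the iSCSI VMkernel port and the
# storage-facing pNIC(s); names and VLAN ID below are examples only.

# Add a virtual machine port group for the guests that need iSCSI access:
esxcfg-vswitch -A "Storage VMs" vSwitch1

# Tag that port group with the storage network VLAN:
esxcfg-vswitch -v 100 -p "Storage VMs" vSwitch1

# List vSwitches and port groups to verify the VLAN assignment:
esxcfg-vswitch -l
```

The guests' second vNICs would then be attached to "Storage VMs", so only those VMs can reach the iSCSI VLAN.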
That would be based on IP addresses then? Is that commonly supported on SANs with iSCSI support?
Yes, every storage system I know of supports LUN masking.
André
Yes, every storage system I know of supports LUN masking.
Thanks André. Is it also called LUN masking when done with iSCSI (as with FC LUN masking)?
I guess it is the same principle: a WWN from an FC HBA or the IP address from a NIC. If being paranoid, I guess both of these could be changed in software by someone wanting to gain access.
I guess it is the same principle: a WWN from an FC HBA or the IP address from a NIC.
Yes.
If being paranoid, I guess both of these could be changed in software by someone wanting to gain access.
For FC, someone would need access to the FC switch. For iSCSI, you can either configure a separate (non-routed) network or use CHAP. Or both.
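As a rough sketch, one-way CHAP from the guest side with the built-in Microsoft iSCSI initiator might look like the following at a Windows command prompt. The portal address, target IQN, user name, and secret are all placeholders, and the array must be configured with the matching CHAP secret (the MS initiator requires secrets of 12 to 16 characters).

```shell
rem Windows command prompt on the guest; all names/addresses are examples.
rem Register the target portal, supplying the CHAP user name and secret:
iscsicli QAddTargetPortal 10.0.100.10 chapuser MySecret12chars

rem Log in to the discovered target with the same CHAP credentials:
iscsicli QLoginTarget iqn.2000-01.com.example:guest-lun chapuser MySecret12chars
```

Combined with LUN masking on the array, this keeps a guest from logging in to the VMFS targets even if it can reach the storage VLAN.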
André
Thanks for your help with this.
One final thing: do you have any feeling for how storage performance would compare between a guest using its own iSCSI initiator and the VMkernel one? Let's say a Windows 2003 or 2008 guest with the default Microsoft iSCSI client. Is the MS client better or worse than the VMkernel one, and does the networking through the guest vNIC produce more overhead?
Software-initiated iSCSI always causes more CPU overhead. Either way, I don't think one is an improvement over the other; you will just have to decide where you want the overhead to be. Personally, I would use the MS client to avoid adding more cycles to the ESX host itself, since you will already be allocating CPU to the VM running the MS iSCSI client. As far as networking goes, it just depends on the amount of I/O you are going to put through it. You will also have to make sure you are using gigabit NICs for the vmnics serving the VM. Usually when I use iSCSI, I end up benchmarking the I/O with IOMeter just to be on the safe side. If there isn't going to be a lot of load, I wouldn't even worry about it.
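For a quick sanity check before breaking out IOMeter, a rough sequential-throughput test can be sketched with dd on a Linux guest (for the Windows guests discussed here, IOMeter or similar would be the equivalent). TESTDIR is a placeholder for a directory on the iSCSI-backed filesystem under test; it defaults to a temporary directory so the commands run anywhere.

```shell
# TESTDIR is a placeholder: point it at a directory on the iSCSI LUN.
TESTDIR=${TESTDIR:-$(mktemp -d)}

# Write 256 MB, flushing to disk so the page cache does not hide the result:
dd if=/dev/zero of="$TESTDIR/ddtest" bs=1M count=256 conv=fdatasync

# Read the file back (note: may be served from cache unless it is dropped):
dd if="$TESTDIR/ddtest" of=/dev/null bs=1M

# Clean up the test file:
rm "$TESTDIR/ddtest"
```

dd reports throughput on stderr after each pass; comparing runs against the guest initiator and against a VMkernel-backed virtual disk gives a first impression before a proper IOMeter run.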
Regards,
Chad King
VCP
"If you find this post helpful in any way, please award points as necessary"