Hi,
we are in the process of moving our datastores from local DAS to a shared iSCSI SAN.
We will have two physical hosts (one ESX 3.5 and one ESXi 3.5) with two iSCSI appliances (a Dell MD3000i and an in-house-built Openfiler 2.2) and two dedicated switches (a 16-port Gbit D-Link 1216T and an 8-port Gbit unmanaged Linksys).
Both iSCSI appliances have redundant Ethernet connections (the Dell has 4 ports on 2 different controllers, the Openfiler has 2 ports), so I planned to distribute the connections across the two switches for redundancy.
Each ESX and ESXi physical host has 2 Ethernet NICs dedicated to iSCSI.
My doubts/questions are:
-is the Linksys "desktop" unmanaged switch OK for this setup? Or should I get a second D-Link 1216T?
-I need to get from the Dell MD3000i at least the performance we are getting with local SAS DAS storage.
Should I enable "jumbo frames" on both the switch and the Dell MD3000i? What about ESX and ESXi? And the Linksys unmanaged switch (will it work with that jumbo MTU)?
-if I do NOT enable jumbo frames on the Openfiler iSCSI, will it work with standard MTU even if it is connected to switches with jumbo frames enabled?
-should I team together the two Ethernet iSCSI connections on the ESX/ESXi physical hosts?
Another doubt I have is this:
-currently on the ESX 3.5 host we have an iSCSI LUN on the Openfiler device which is accessed using the Microsoft software iSCSI initiator inside a Win2k03 VM. Can I disable the Microsoft iSCSI initiator inside the VM, reconnect that LUN using the ESX software iSCSI initiator, and have Win2k03 find all the data again?
Thank you in advance
Guido
Take a look at this for your specific question:
http://communities.vmware.com/thread/107173
And for a broad overview of iSCSI on ESX, read this when you get a chance.
-is the Linksys "desktop" unmanaged switch OK for this setup? Or should I get a second D-Link 1216T?
For jumbo frame support you would need a switch that supports jumbo frames, so check whether the unmanaged Linksys does before relying on it.
-I need to get from the Dell MD3000i at least the performance we are getting with local SAS DAS storage. Should I enable "jumbo frames" on both the switch and the Dell MD3000i? What about ESX and ESXi? And the Linksys unmanaged switch (will it work with that jumbo MTU)?
Check the following link for info on jumbo frames and ESX
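As a rough sketch, this is how jumbo frames get enabled on ESX 3.5 from the service console (keep in mind jumbo frame support on the VMkernel/iSCSI path was still listed as experimental in 3.5, so verify against the docs first; the vSwitch name, port group name, and IP below are placeholders, not from your setup):

```shell
# Assumption: iSCSI traffic goes through vSwitch1 via a VMkernel port "iSCSI".
esxcfg-vswitch -m 9000 vSwitch1     # raise the vSwitch MTU to 9000
# The VMkernel NIC must be (re)created with the jumbo MTU as well
# (placeholder IP/netmask -- substitute your own):
esxcfg-vmknic -a -i 192.168.1.10 -n 255.255.255.0 -m 9000 iSCSI
esxcfg-vswitch -l                   # verify the MTU column now shows 9000
```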
If you can get it to work, you may want to compare the performance of DAS and iSCSI for a while before fully committing to the move.
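One crude way to make that comparison (a sketch, not a proper benchmark -- something like Iometer inside the VM would give better numbers) is a sequential write with dd, run once against a DAS-backed disk and once against an iSCSI-backed one, comparing the rates dd reports. TARGET is a placeholder path:

```shell
# Placeholder: point TARGET at a file on the datastore/disk under test.
TARGET=./dd-testfile
# Write 64 MB sequentially; conv=fsync makes dd flush to disk before
# reporting, so the rate reflects the storage and not just the page cache.
dd if=/dev/zero of="$TARGET" bs=1M count=64 conv=fsync
rm -f "$TARGET"
```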
-if I do NOT enable jumbo frames on the Openfiler iSCSI, will it work with standard MTU even if it is connected to switches with jumbo frames enabled?
Sure. Enabling jumbo frames on the switch only raises the maximum frame size it will pass; endpoints using the standard 1500-byte MTU will still work.
-should I team together the two Ethernet iSCSI connections on the ESX/ESXi physical hosts?
No point, really. iSCSI connections cannot be aggregated in this way. For failover, sure, but it will not help you gain more bandwidth.
Another doubt I have is this:
-currently on the ESX 3.5 host we have an iSCSI LUN on the Openfiler device which is accessed using the Microsoft software iSCSI initiator inside a Win2k03 VM. Can I disable the Microsoft iSCSI initiator inside the VM, reconnect that LUN using the ESX software iSCSI initiator, and have Win2k03 find all the data again?
Why get rid of the Microsoft initiator? One of the advantages of the software initiator is that it makes that storage much more portable. If you do want to move it, you should be able to connect the iSCSI LUN to the Win2k03 VM as an RDM.
-should I team together the two Ethernet iSCSI connections on the ESX/ESXi physical hosts?
*No point, really. iSCSI connections cannot be aggregated in this way.
For failover, sure, but it will not help you gain more bandwidth.*
OK, now I understand that I cannot aggregate iSCSI connections to get more bandwidth.
To team them for failover, should I do these steps?
-connect the two physical adapters (vmnic1 and vmnic2) to the same vSwitch (vSwitch1)
-vSwitch1 / Properties / Edit / NIC Teaming -> how should I set the parameters here?
-port group / Properties / Edit / NIC Teaming -> how should I set the parameters here?
Consider that I have the two physical NICs connected to different physical switches.
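For what it's worth, the first of those steps can also be done from the ESX service console. A rough sketch, using the vmnic1/vmnic2/vSwitch1 names from above (the "iSCSI" port-group name is an assumption):

```shell
esxcfg-vswitch -L vmnic1 vSwitch1   # attach the first physical NIC as an uplink
esxcfg-vswitch -L vmnic2 vSwitch1   # attach the second physical NIC
esxcfg-vswitch -A iSCSI vSwitch1    # assumed port-group name for the VMkernel port
esxcfg-vswitch -l                   # confirm both vmnics show up as uplinks
```

The active/standby order and failback behaviour still have to be set in the NIC Teaming dialogs you listed (or via the VI API); the CLI above only wires up the uplinks.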
Thanks a lot