VMware Cloud Community
linnallen
Contributor

Isolated SAN Network - first SAN

We are fairly new to ESX and installing our first SAN (Equallogic PS series iSCSI). ESX installed to boot from SAN through the Qlogic HBA was successful. Documentation we have found says to have the SAN on a separate isolated network, which we have done using separate ESX NICs, a switch, and a SAN subnet. We were also told it was best to use the MS iSCSI initiator to connect to the SAN from all created VMs on the ESX host to create data volumes for the VMs to use. We're having trouble with the config and can't seem to find "Best Practices" or "Layout Config" docs for an ESX isolated SAN network. Ideas and docs would be a great help.

16 Replies
virtualdud3
Expert

What kind of troubles are you having?

Here is a link that might help you out:

http://www.vmware.com/pdf/vi3_iscsi_cfg.pdf

############### Under no circumstances are you to award me any points. Thanks!!!
linnallen
Contributor

These are our first ESX hosts and our first SAN install. I have been to the Install & Configure class, but this is my first hands-on experience. I looked at the doc you sent and it is a good overview, but it doesn't address specific settings or the best design layout. We have a 10.28.28.xx LAN config with its own ESX NICs and network switch. We have a separate switch for our SAN network and have added an HBA and a two-port NIC to the ESX host with the intention of connecting to the SAN switch. The SAN network is 192.168.1.xx with a 255.255.0.0 subnet mask and is not cross-connected to the LAN switch. I have successfully installed ESX to boot from SAN through the HBA and created a VM through the HBA on a SAN datastore. The MS iSCSI initiator is installed on the VM but will not connect to the SAN, and I cannot ping any of the SAN IPs from the VM. I'm looking for docs that show me the "proper way" and "best practices" to config the isolated network. We have an ESX host and SAN at our office, not in production, that I can config/learn on. I seem to find a lot of overview docs but not much in-depth setup material.

Steve_Marfisi
Contributor

Linnallen,

Curious...what was the rationale (and whose advice? Equallogic? Reseller? VMware?) given for using the MS iSCSI initiator to connect to the SAN for all created VMs?

Thanks,

Steve Marfisi

emBoot Inc.

virtualdud3
Expert

If you would like to have the VMs connect to the iSCSI LUN via the MS iSCSI initiator, they are going to have to be able to connect to the iSCSI subnet. I suppose you could configure routing so that whatever default gateway the VMs connect to can route the traffic to the iSCSI network. But, if this is the case, the iSCSI network really won't be isolated.

I agree with the previous poster: what is the logic of having the VMs use the iSCSI initiator to connect to the SAN for all created VMs?

############### Under no circumstances are you to award me any points. Thanks!!!
Steve_Marfisi
Contributor

To clarify - my question's purpose was not one of stating that using the iSCSI initiator is illogical. The intent was to find out the actual rationale given for using the Microsoft iSCSI initiator within the VM. I'm interested in hearing it.

Steve Marfisi

virtualdud3
Expert

It sounds like we are "on the same page".

I wasn't trying to criticize anyone's decisions; I am also curious.

The only time I have installed the MS iSCSI initiator within a VM is when I created a virtual VCB proxy for test/dev (which worked really well for its intended purpose).

############### Under no circumstances are you to award me any points. Thanks!!!
doubleH
Expert

Steve -- take a look at the "inofficial" (not my spelling) performance thread. I remember seeing users getting better performance this way using the MS initiator vs. the VMware initiator. Also, maybe he wants to take advantage of his SAN's built-in bells and whistles such as snapshots or replication.

If you found this or any other post helpful, please consider the use of the Helpful/Correct buttons to award points
linnallen
Contributor

This came from the local Equallogic rep. As I understand it, their rationale was to use the HBA for the actual connection traffic between the service console and the SAN. The Equallogic SAN has redundant controllers with three gigabit NICs each. The thought was to create the VM and then add storage volumes to it using the MS iSCSI initiator. This was supposed to offload overhead from the HBA and give multiple gigabit bandwidth paths for the VM connections to the SAN. At least that is my understanding of the rationale.

skip181sg
Contributor

I suspect what is meant here is the following:

1. Use the two HBAs to mount the VMFS and ESX boot volumes. The VMFS volumes would hold the VMDKs for the guest OSes.

2. Use a software initiator - a native iSCSI mount from the guest OS via GbE NICs, direct to the SAN - for the application volumes.

So usually four SAN-facing interfaces in all: two HBAs and two GbE NICs.

linnallen
Contributor

This was a correct assumption, but our underlying problem still exists. The SAN connection through the HBA works fine, but we still can't connect through the NICs. I believe it is a network setup issue within ESX. I am just starting to learn how to configure the vSwitches and networking. Any VM I create can connect through a vSwitch to the LAN (10.28.28.xx) but cannot connect to the SAN network (192.168.1.xx). I have tried adding a second vSwitch/NIC on the ESX host and setting the IP for the SAN, but still can't connect (either by ping or iSCSI initiator discovery). I can find a great many KBs/white papers with general overviews but little about the actual config of a VM/ESX to connect through separate switches and subnets. So far I am getting that best practice calls for a separate SAN network, but now tell me how to actually config the connection to it. Maybe I am entering the wrong search parameters when I look for docs.
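For anyone following along, the current layout can be checked from the ESX service console - this is only a sketch, and the vSwitch/vmnic names in the output depend entirely on the host:

# List the vSwitches, their port groups, and which vmnic uplinks they use
esxcfg-vswitch -l
# List the physical NICs ESX sees, with link state and speed
esxcfg-nics -l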

Jae_Ellers
Virtuoso

Each VM will have to have a "production" and a "storage" NIC.

Since the storage VLAN/network isn't routed through production, you will have to have two physical connections to the ESX servers.

This means you'll need to create a vSwitch and port group for each physical connection. You will then need to create two virtual NICs in the VMs and connect them to the port groups.
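A rough sketch of that service console side, assuming the spare SAN-facing port is vmnic2 and using placeholder names (adjust to your own hardware):

# Create a second vSwitch dedicated to iSCSI traffic
esxcfg-vswitch -a vSwitch1
# Add a virtual machine port group for the guests' storage NICs
esxcfg-vswitch -A "SAN Network" vSwitch1
# Uplink the physical NIC that is cabled to the SAN switch
esxcfg-vswitch -L vmnic2 vSwitch1
# Confirm the layout
esxcfg-vswitch -l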

Then you assign your IP addresses. At this point you can ping the iSCSI device. Then you get the IQN from each initiator and plug those into the initiator groups on your iSCSI SAN. Then you present your LUNs to the initiator groups. Then you can mount your iSCSI LUNs inside your VMs.
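Inside a Windows guest, once its storage NIC has an address on the SAN subnet, the initiator steps can also be run from the command line - a sketch only, with an example portal address and IQN rather than real ones:

rem Register the array's group IP as a target portal (example address)
iscsicli AddTargetPortal 192.168.1.10 3260
rem List the targets the array presents to this initiator
iscsicli ListTargets
rem Log in to a target by its IQN so the volume shows up in Disk Management
iscsicli QLoginTarget iqn.2001-05.com.equallogic:example-volume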

You can do the same thing with fewer physical connections and trunking. This design is OK and reflects the state of iSCSI today; there is obviously a lot of room for improvement. Running the initiator in the guest adds overhead and should be less efficient, but testing shows it's faster today, which is why it's being recommended. It would be interesting to see how this scales and how it changes over time as I/O efficiency is optimized in ESX.

-=-=-=-=-=-=-=-=-=-=-=-=-=-=- http://blog.mr-vm.com http://www.vmprofessional.com -=-=-=-=-=-=-=-=-=-=-=-=-=-=-
linnallen
Contributor

Thank you. This ultimately gave me the answer. I had already done everything you suggested and it didn't work. With your response I went back into my ESX network config and checked the settings. Both the LAN and SAN virtual NICs in the VM were set to use the same Virtual Machine Port Group (VM Network, which was the LAN port group). Once I changed the SAN NIC to the proper port group, everything started working properly. Thanks again.

virtualdud3
Expert

Ahhh, I misunderstood your question.

Regardless, I would not have expected that running an iSCSI initiator within a VM would increase performance - apparently it does.

That is what I love about these forums - I learn something new every day!!!

############### Under no circumstances are you to award me any points. Thanks!!!
woodsp
Contributor

Hello,

We also have an Equallogic SAN and are in the process of migrating our locally installed VMs onto it.

I've also been told about the best practice of running an iSCSI initiator in the VM to attach it to the HBA that is physically installed in the host server. I'm at the point of looking to see the SAN via an iSCSI initiator in a VM after connecting it to the hardware HBA in our ESX server.

Our network IP address range is 192.168.x.x and our SAN is on the 10.7.0.x range - the HBA is dual-port and has the IP addresses 10.7.0.100 and .101. I can see the volume in the VI Client, but not in the existing VMs.

I cannot see how you could create a vSwitch with an HBA connection - it only shows NICs, not HBAs.

Can anyone help with this or point me in the right direction?

Thanks,

Paul

v01d
Enthusiast

I cannot see how you could create a vSwitch with an HBA connection - it only shows NICs, not HBAs.

Can anyone help with this or point me in the right direction?

Thanks,

Paul

An iSCSI HBA is not a NIC to the host (it is viewed as a SCSI adapter), thus it cannot be connected to a vSwitch.
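One way to see the distinction on the host (a sketch; adapter numbering will differ): the HBA never appears in the NIC list, only among the storage adapters as a vmhba device.

# Physical NICs only - the iSCSI HBA will not show up here
esxcfg-nics -l
# Storage paths by vmhba adapter - this is where the HBA and its LUNs appear
esxcfg-vmhbadevs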

linnallen
Contributor

We set up our ESX servers to boot from SAN. For this we used a hardware HBA to talk between the service console and the SAN. For now we have a single-port HBA on each ESX host but will probably change to dual-port as our network grows and we become more comfortable with our "standard configuration". Our ESX servers have a two-port embedded NIC and we added an additional two-port PCI card to our expansion cage. Our production network and our SAN network are on separate subnets. We created a separate vSwitch with a different VM network (port group) to allow connection for iSCSI traffic. We then added a second NIC to our VM, tied it to the new VM network/vSwitch, and statically set it to an address on our SAN subnet. Now I could see my SAN through the MS iSCSI initiator on my VM. My error had been that when I added the second NIC on the VM, I set it to the wrong VM network.

Equallogic told me that it was best to have the boot from SAN go through the hardware HBA and then use the MS iSCSI initiator to add storage volumes to my VMs. According to them this gives better performance and more flexibility than having all traffic go through the HBA. What we are doing is creating our VM servers with an established OS partition size that suits our needs, all on a datastore recognized by the HBA. We then install the iSCSI initiator and create our data volumes through the NICs.
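For the guest side, assigning the static address on the second (storage) NIC and checking reachability can be done from the command line as well - the interface name and addresses below are only examples:

rem Static IP on the storage NIC; no gateway, since the SAN subnet is not routed
netsh interface ip set address name="SAN NIC" static 192.168.1.50 255.255.255.0
rem Make sure the array's group IP answers before configuring the iSCSI initiator
ping 192.168.1.10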

I hope this answers your question. Pardon the detail/repetition, but I am learning as well and organizing what I know - or at least think I do. One word of advice from experience: when you create volumes/datastores on your SAN, don't present them to both the HBA and the iSCSI initiators. You can easily confuse what you are looking at and allocate more storage than you have available.
