ardenen
Contributor

Migrating ESX from iSCSI to Fiber Channel on EMC CX3-10 SAN

We currently have two ESX 3.5 servers in a cluster connected to a Dell/EMC CX3-10 SAN via iSCSI. This cluster is our virtual server cluster, and we will be adding a third server to it in a couple of months. We just purchased three new servers for a second cluster that will be our virtual desktop cluster. All servers have been successfully connected to the SAN via iSCSI and tested, and all servers have fiber channel HBAs. The server cluster is in production, so I can't really mess around with it without scheduling downtime; the virtual desktop cluster is not yet in production, however, and I have been doing most of my testing with that cluster.

I cannot figure out how to migrate from iSCSI to Fiber Channel. I've been searching the Powerlink site for days, and apparently I don't know what to look for, because I have not found a thing. I was hoping someone with ESX experience could point me in the right direction or give me a few pointers on how exactly you connect an ESX server to a SAN via fiber channel. The ESX servers currently can see a SCSI target path but cannot connect to the LUNs on that path. From the SAN side, the iSCSI connections show up under the storage group, but the fiber channel connections are in a ~management group and not part of the assigned storage group for the server. If I remove iSCSI, it just greys out the connections to the storage group and still lists the fiber connections as management.
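For reference, forcing the host to rescan the FC HBAs from the service console looks roughly like this (vmhba1 and vmhba2 are placeholders; substitute whatever numbers your FC HBAs were assigned):

    # Rescan each fiber channel HBA for new targets and LUNs
    esxcfg-rescan vmhba1
    esxcfg-rescan vmhba2

    # Re-read VMFS metadata so any newly visible volumes show up
    vmkfstools -V

After running these, the target path shows up but the LUNs behind it still don't.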

Has anyone done something similar? I really don't want to move forward with the launch of 150 virtual desktops without fiber channel in place on the new servers.

As a side note, can one cluster be connected to the same LUNs via fiber channel as the iSCSI cluster? I've successfully connected two clusters to the same LUNs via iSCSI and was wondering if I could have mixed connectivity to the LUNs.

Thanks!

7 Replies
kjb007
Immortal

Are you connecting directly to the CX3-10, or do you have a SAN switch in between?

Fiber Channel and iSCSI are similar in that they allow access to a raw LUN. That raw LUN can be shared by multiple hosts without problems. If you are not seeing the LUN(s), then make sure you can see the WWNNs of the ESX host HBAs on your SAN. If you can see the WWNNs, have you allowed access to those LUNs for those WWNNs?
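If it helps, the HBA WWNs can be read from the service console. A rough sketch, assuming QLogic HBAs (the qla2300 driver); Emulex cards show up under /proc/scsi/lpfc instead:

    # QLogic HBAs expose adapter info, including the port WWN, via /proc
    cat /proc/scsi/qla2300/1

    # Alternatively, grep the full host dump for world wide names
    esxcfg-info | grep -i 'world wide'

Compare those WWNs against the initiator records Navisphere shows before assigning the LUNs.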

I run almost exclusively Fiber Channel in a switched fiber environment, and I've found it much simpler overall.

-KjB

vExpert/VCP/VCAP vmwise.com / @vmwise -KjB
ardenen
Contributor

For Fiber Channel we have two Brocade FC switches; for iSCSI we have an iSCSI VLAN on a stack of Cisco 3750Gs.

I think I solved part of the problem. I removed the host from the storage group and added it back in, and it connected over both fiber channel and iSCSI -- which has me a little worried. I know the VM files keep track of who has access, but I can't tell how the ESX server prioritizes its paths to the LUN. I was told that you couldn't run both iSCSI and Fiber Channel; so far, according to the SAN and VMware, I have both connected -- no idea if there is information passing across both.
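To see which transport each LUN is actually riding on, the path list from the console shows the active path per LUN; a sketch (on 3.5 the software iSCSI initiator typically shows up as a high-numbered vmhba such as vmhba32, the FC HBAs as low numbers, but check your own output):

    # List the paths for every LUN; the path marked active/preferred
    # tells you which HBA, and therefore which transport, carries I/O
    esxcfg-mpath -l

    # Map vmhba devices to console devices and VMFS volumes
    esxcfg-vmhbadevs -m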

This also brings up another question about ESX -- if I detach an ESX server from a LUN, it loses all connectivity to the VMs. Will it recover the VMs after being reattached?

ctfoster
Expert

This also brings up another question about ESX -- if I detach an ESX server from a LUN, it loses all connectivity to the VMs. Will it recover the VMs after being reattached?

It all depends on what you mean by 'recover'. If you are running HA, detach the SAN, and expect your VMs to restart on the other host, you might be surprised to find out that this doesn't happen -- well, not unless you are 'lucky' enough to PSOD the host. I'm not sure what happens when the store is once again available. Do the VMs autostart? I've never tried it. Probably not.

If you are referring to 'recovering' the vmdk files -- ESX simply starts the OS with the vmdk in whatever state the last commit to disk left it in. It's up to the guest OS to sort out the mess. There is always the possibility of some corruption, not necessarily at a physical level, but if you are running some sort of transactional system you always run the risk of logical damage.

ardenen
Contributor

I wasn't expecting to detach the host from the SAN with VMs running on it. There were two ways I envisioned this going. The more ideal way would be to put one of the two production hosts into maintenance mode, remove the host from the SAN, and reconnect it; then VMotion the VMs to the FC host, put the second host into maintenance mode, and repeat. My concern there is whether Fiber Channel and iSCSI can coexist on the same host. So far on my test cluster this has been the case, but I don't know which path is prioritized, and I have some concerns about data corruption if one server is accessing / VMotioning the VMs over iSCSI while the other is on FC.

The second way was to schedule downtime: bring everything down into maintenance mode, disconnect both production hosts, and reconnect them with FC only. This is where I'm concerned about recovery -- if the datastores are reconnected and the hosts exit maintenance mode, will the VMs all be available to power back on, or is there an extra step to recover the VMs?
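On the recovery half of the question, my assumption (untested) is that after a rescan the VMFS volumes simply reappear; something like this from the console would confirm the datastores are back before taking a host out of maintenance mode:

    esxcfg-rescan vmhba1    # pick up the FC paths to the LUNs
    vmkfstools -V           # re-read VMFS metadata
    ls /vmfs/volumes/       # the datastores should be listed again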

kjb007
Immortal

OK, you can't do a regular VMotion with different datastores. If you have the downtime, the safest mode would be to have one server with fiber channel configured and the other with iSCSI. Then shut down your VM and do a cold migration to the host and datastore you want to move to. I'm not sure there is any reason why you can't have both iSCSI and Fiber Channel at the same time. Where did you hear or read that? If you have both of the protocols and LUNs at the same time, then you can also try SVMotion of the storage while the VM(s) is/are up.
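A note on the SVMotion suggestion: in 3.5 it runs from the VI Remote CLI, not the host itself. Interactive mode prompts for everything (VirtualCenter server, credentials, the VM to move, destination datastore), so the sketch is just:

    # Run from a machine with the VMware Remote CLI installed
    svmotion --interactive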

-KjB

vExpert/VCP/VCAP vmwise.com / @vmwise -KjB
ardenen
Contributor

Same datastores, just different paths -- I need to connect the FC path to the LUNs.

Dell "Gold" tech support is who we purchased the system through and who did the initial install -- they were the people who said we cannot run both, and they made the migration seem like a very scary deal with a high potential for data loss. My experience with the "Gold" service is such that I'd rather listen to you guys... I'll leave it at that.

All right... I think I'm going to schedule downtime then, just to be safe... but it sounds like it shouldn't be a problem reconnecting the servers and VMotioning the VMs.

I'll report my findings after the migration -- thanks for the help.

ardenen
Contributor

All went well -- no downtime was necessary. All that needs to be done is: put the host into maintenance mode; (if you no longer want to use iSCSI) disable the iSCSI connector; remove the host from the storage group; add it back to the storage group; reboot the server; exit maintenance mode. I've successfully tested hosts using both iSCSI and Fiber Channel, and a mixed cluster where one host is using iSCSI and the other is using Fiber Channel. It does not make a difference, and there was no data corruption. 100% uptime for my virtual servers. VMware makes an amazing product.
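For anyone finding this thread later, the console-side version of that sequence is roughly the following; it assumes the software iSCSI initiator (hardware iSCSI HBAs would be handled differently), and the storage group changes are done in Navisphere:

    # Host already in maintenance mode:
    esxcfg-swiscsi -d       # disable the software iSCSI initiator
    # ...remove the host from the storage group in Navisphere,
    # then add it back so the FC initiators register...
    reboot
    # After the reboot, confirm paths and datastores are back:
    esxcfg-mpath -l
    ls /vmfs/volumes/
    # then exit maintenance mode from the VI Client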
