VMware Cloud Community
ericGT50
Contributor

SRM 4.1.1 still does not work properly with Linux

Reporting from ticket: 11074400806

We run over 100 CentOS 5.5 Linux servers in web, processing, and data-integration roles.

They unfortunately have to have two virtual NICs. The primary NIC, eth0, is a LAN NIC on the designated VLAN for its role, whether prod, dev, QA, load, or training. The secondary NIC, eth1, is a SAN NIC that mounts NFS datastores from an EMC Celerra NS480. Unfortunately that NIC has to use the VMXNET3 driver so we get proper utilization of the 9000 MTU setting to NFS datastores hosted on the EMC 10 Gb FCoE network segment.
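For context, the SAN-side config we need carried over looks roughly like this on CentOS 5 (a sketch; the MAC and IP below are placeholders, not our real values):

```shell
# /etc/sysconfig/network-scripts/ifcfg-eth1 -- SAN NIC (CentOS 5)
DEVICE=eth1
BOOTPROTO=static
ONBOOT=yes
HWADDR=00:50:56:XX:XX:XX   # placeholder MAC; pins this config to the VMXNET3 adapter
IPADDR=10.10.200.15        # example SAN-side address
NETMASK=255.255.255.0
MTU=9000                   # jumbo frames toward the NFS datastores
```

The HWADDR line is what ties the file to a specific adapter, so losing it (or the MTU line) during customization is exactly the failure we see.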

The problems:

The DR IP Customizer utility spreadsheet does not work properly. On the VMs it recreates eth0 and eth1, but in doing so it swaps the LAN and SAN NICs. It also does not carry over any MTU settings.

For the default field of the network adapters, it does not change our DNS from 10.10.100.69 and 10.10.100.70 to the remote DR VLAN's servers, which are 10.11.100.69 and 10.11.100.70.

I have also tried using the customization template on the recovery side. That gets closer: the IPs change, and at least the DNS entries are there. However, the host loses its default gateway setting, and the NICs still swap their LAN / SAN roles. There is also no way to set the MTU for a SAN-based NIC in the customization wizard.
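For now we end up restating the gateway by hand after failover; on CentOS 5 the default gateway lives in /etc/sysconfig/network (a sketch with example values, not our real hostname or gateway):

```shell
# /etc/sysconfig/network (CentOS 5) -- example DR-side values
NETWORKING=yes
HOSTNAME=web01.example.com
GATEWAY=10.11.100.1
```

This is the setting the customization wizard is dropping; restating it and restarting networking gets the host reachable again.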

I have waited patiently through multiple tickets and revisions of VMware's product updates for this to be fixed, and even in version 4.1.1 the Linux failover capability is still not as reliable as it is for Windows.

3 Replies
ar1es
Enthusiast

Hello Eric,

Why do the Linux systems access your SAN using a separate NIC? You could just create an RDM or a separate virtual disk that points to the SAN. If you're using iSCSI, you would need a VLAN for the VMkernel; if Fabric, it would be handled by the HBA on the ESX host.

Not to mention that, from the looks of things, there isn't any redundancy in place for your individual NICs. Furthermore, it presents an administration nightmare.

This would remove the dependency on an additional NIC.

-rp

ericGT50
Contributor

The datastores for ESX / VMs are NFS datastores - we are using 10 Gb Ethernet FCoE, not block protocols like iSCSI or Fibre Channel. RDMs are not supported as proxy mappings for NFS. Also, the LUN that the web servers point to for our client data is being shared out by the EMC as an NFS mount point.

mal_michael
Commander

Hi,

I have no experience customizing Linux VMs, but with Windows VMs it does work as expected most of the time.

The DR IP customizer utility spreadsheet does not work properly. What it does on the VM's is to recreate the eth0 and eth1 - however in doing so it swaps the LAN and SAN nics.

Does this happen on all of your VMs?

Do you have the correct correlation between the Network Adapter ID (PCI slot) and the adapter ID in the CSV file? Is this consistent across all of your VMs?

You can also try putting the MAC address in the CSV file to ensure that the correct adapter is being reconfigured.
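To collect the MACs for that, something like this inside the guest lists each interface with its hardware address, which you can then match against the virtual NIC's MAC shown in the VM's settings in the vSphere client (a sketch; works on any Linux with sysfs, including CentOS 5):

```shell
# Print each network interface with its MAC address so it can be
# matched against the corresponding virtual adapter in vSphere
for nic in /sys/class/net/*; do
    echo "$(basename "$nic"): $(cat "$nic"/address)"
done
```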

It also does not allow any carry over of MTU settings.

Not sure about that. Is this setting being reset to the default after the customization?

For the default field of the network adapaters it does not change our DNS from 10.10.100.69. 10.10.100.70 to the remote DR VLAN which is 10.11.100.69 and 10.11.100.70.

Does this happen on all of your VMs? If you put the DNS in each adapter's row, does it work?

Michael.
