VMware Cloud Community
MartinHorton
Contributor

ESXi 5.5 and Synology NFS datastores

I am posting this both here and on the Synology forum because I am not sure where the error lies.

All that follows relates to my home lab. I am experimenting here prior to installing a VMware-based system that uses a Synology for data storage.

I have a DS1815+ running DSM 6.0.2-8451 Update 6, connected to a single VMware host running ESXi 5.5.0 build 2403361. The VMware host has internal disks holding two datastores.

I started with 4 x 1TB drives in the Synology as RAID 5, exposed as an iSCSI target. On the VMware host I have a Windows Server 2012 VM sitting on a datastore backed by the internal hard drives, and I used the Windows iSCSI initiator inside that VM to connect to the RAID target. This became the F drive. It all worked fine. This was done mainly so I could validate that I could set up and use iSCSI targets.

Then I did more research and realized that, while there might be cases in which using the iSCSI initiator from within a guest to access a Synology iSCSI target makes sense, in general it makes more sense to create iSCSI targets on the Synology for use as VMware datastores. So I added a 4TB HD to the Synology, created an iSCSI target on it, and then created a new datastore on VMware backed by that target. Next I added a new hard drive to the Windows 2012 Server VM, placing its virtual disk (VMDK) on the new datastore. I made the disk 3.5TB (almost the whole drive). This became the G drive. Having done that, I copied the F drive to the G drive and moved the shares from F to G, and the entire system continued to work as expected.
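For reference, the ESXi side of hooking up a Synology iSCSI target can also be done from the command line. This is only a rough sketch of what the vSphere Client does behind the scenes; the adapter name and IP address are placeholders for my lab:

```shell
# Enable the software iSCSI initiator (adapter name varies;
# check yours with `esxcli iscsi adapter list` -- vmhba33 is just an example)
esxcli iscsi software set --enabled=true

# Point dynamic discovery at the Synology (placeholder address)
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=192.168.1.50:3260

# Rescan the adapter so the new LUN shows up for datastore creation
esxcli storage core adapter rescan --adapter=vmhba33
```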

So then I did some more research and discovered that many articles suggested NFS datastores were faster than iSCSI datastores. This seemed counterintuitive to me, but I thought I would at least try it.

So I added 2 more 1TB drives to the Synology and created a mirrored NFS share of approximately 1TB. Then I tried to access it from VMware, but no matter what I did it couldn't be mounted, even though I had followed all the procedures accurately. Then I came across a YouTube video showing that the /etc/exports file had to be modified to remove _lock from one of the parameters. Once I did that, I was able to add the datastore on VMware.
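For anyone comparing notes, a Synology export line looks roughly like the following (the path, network, and IDs are placeholders for my setup, and DSM regenerates this file, so hand edits may not survive changes made in the GUI):

```shell
# Example line from /etc/exports on the Synology.
# The parenthesized options are what ESXi is sensitive to when mounting.
/volume2/nfs_share 192.168.1.0/24(rw,async,no_wdelay,insecure_locks,sec=sys,anonuid=1025,anongid=100)
```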

That is when the trouble started. Without any notice and for no apparent reason, the mirrored NFS datastore just disappeared from VMware. At the same time, the datastore based on the iSCSI target also disappeared, even though the Synology still showed both as intact. When I tried to access the iSCSI target again as a datastore, it took forever but eventually said the disk was empty. Windows Explorer on the server VM hung when trying to access the G drive, unsurprisingly.

Undeterred, I recreated the 4TB HD as an NFS volume and tried adding it as an NFS datastore. Even though the datastore gets created, I am unable to create any virtual disk on it; every time I try, the datastore disappears.
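In case it helps with diagnosis, this is roughly how I mount and then check the NFS datastore from the ESXi shell (host address, share path, and datastore name are placeholders):

```shell
# Mount the Synology NFS export as a datastore
esxcli storage nfs add --host=192.168.1.50 --share=/volume2/nfs_share --volume-name=SynologyNFS

# Verify the mount state; a datastore that "disappears" often still
# appears here as unmounted/unavailable, with details in /var/log/vmkernel.log
esxcli storage nfs list
```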

Furthermore, since the iSCSI adapter in VMware was no longer being used, I tried to remove it, but there was no way to do so. When I disabled it instead, it demanded a reboot, after which it was gone. Is it normal for getting rid of an unused iSCSI adapter to demand a reboot?
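What I did to disable it was roughly the following from the ESXi shell (the GUI path does the same thing):

```shell
# Disable the software iSCSI initiator; on ESXi 5.5 the change
# only takes full effect after a host reboot
esxcli iscsi software set --enabled=false

# Confirm the current state
esxcli iscsi software get
```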

Am I missing something very simple about creating datastores from NFS? And is NFS really faster than iSCSI, and if so, is the difference dramatic?

Any help appreciated.
