Well, I'm getting nowhere with using the QNAP TS-259 Pro+ as a datastore. I can't for the life of me get it set up, and it shouldn't be this hard. I used this QNAP doc as a guide to setting up the QNAP - http://files.qnap.com/news/pressresource/product/How_to_set_up_QNAP_NAS_as_a_datastore_via_iSCSI_for.... I'm using Broadcom BCM5709 iSCSI adapters in my host, and I used the ESX/ESXi Configuration Guide to configure them. Everything in my iSCSI adapter config looks good, and the ESXi host and the NAS each see the other as connected. When I go to add my datastore, the wizard sees the NAS; I select it and step through the Add Storage wizard, but on the first step I sometimes get this error -
Call "HostDatastoreSystem.QueryVmfsDatastoreCreateOptions" for object "ha-datastoresystem" on ESXi "x.x.x.x" failed.
Other times I am able to get further into the Add Storage wizard, to the point where it is ready to format it with VMFS, and it times out with this error -
Call "HostDatastoreSystem.CreateVmfsDatastore" for object "ha-datastoresystem" on ESXi "x.x.x.x" failed.
Operation failed, diagnostics report: Unable to get FS Attrs for /vmfs/volumes/4d358e43-3d8ab100-c4eb-0015c5e99f8d
I have rebuilt the ESXi host, reset the NAS to factory defaults as well, and started everything over. Yet I still get these errors. I have googled, searched the VMware forums and QNAP forums, and tried pretty much everything I found related to these errors, most of which didn't work - as in, I wasn't able to successfully complete the fixes. I believe the LUN number is 0. On the ESXi host, I've tried fdisk -u 0, and it comes back saying it can't find 0.
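Before running fdisk against a bare "0", it can help to confirm what device name the host actually assigned to the LUN. A rough sketch of the commands I'd try from an SSH/Tech Support Mode session on the host (commands per ESX/ESXi 4.x; adjust for your version):

```shell
# List all SCSI devices/LUNs the host can see, with details
esxcfg-scsidevs -l

# Compact listing: device name, type, and size per LUN
esxcfg-scsidevs -c

# The iSCSI LUN, if visible, appears as a device node here
# (typically a naa.* or vml.* name, which is what fdisk expects)
ls /vmfs/devices/disks/
```

If the LUN doesn't show up in any of these listings, the problem is upstream of partitioning - the host isn't seeing the disk at all.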
But why would I be using the ESXi software iSCSI adapter??? Isn't that only used with standard NICs? Even the VMware documentation says the Broadcom 5709 card is a dependent iSCSI initiator - I specifically bought these cards for that reason. The documentation walks through all the steps to set them up as such. I'm not trying to argue, I'm just looking for help/explanations. I thought I was doing everything according to the documentation.
So when using iSCSI hardware (dependent or independent), you always use the software iSCSI adapters??? This doesn't seem right to me.
Well, after more googling I found some conversations about how in some cases it's possible that the iSCSI driver for the BCM5709 may not be compatible with certain iSCSI targets. So who knows, maybe this is the problem.
I've also read that because the BCM5709 doesn't support jumbo frames when used as a dependent iSCSI initiator, if one can afford a little CPU, one should just use the BCM5709 with the software iSCSI initiator and jumbo frames instead. While this uses a bit of CPU, it will outperform the dependent iSCSI initiator.
So, I guess I will go back and learn how to use the BCM5709 with the software iSCSI and jumbo frames. Fun Fun!!!
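For anyone following along, a rough sketch of the jumbo-frame setup on ESX/ESXi 4.x, which had to be done from the CLI (the vSwitch name, portgroup name, and IP below are placeholders - substitute your own, and note every switch in the path must also support jumbo frames):

```shell
# Raise the MTU on the vSwitch carrying iSCSI traffic
# ("vSwitch1" is a hypothetical name)
esxcfg-vswitch -m 9000 vSwitch1

# Create a vmkernel port with MTU 9000 on a portgroup named "iSCSI"
# (on 4.x an existing vmknic had to be deleted and recreated to change MTU)
esxcfg-vmknic -a -i 192.168.1.10 -n 255.255.255.0 -m 9000 iSCSI
```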
Any comments are of course welcome - I need all the help I can get! :smileylaugh:
OK, forget about the above post. I had one last idea: what if the problem lies between the host and the storage? I had a cat6 cable running from the host to a switch, and from the switch a cat5e cable running to the storage. Sounds fairly simple in my opinion. Well, I removed all of that, took a cat6 crossover cable, and connected the host directly to the storage. Did a rescan, and adding the storage worked immediately. So, long story short, I did have the BCM5709 set up correctly as dependent iSCSI adapters, without having to implement software iSCSI. Now I have to determine whether it's a setting in the switch (a NetGear smart switch with the default config) or the cat5e cable. I'll swap the cat5e cable first since that only takes seconds. If that's not it, I'll have to dive into that switch, which I had planned to do anyway. So happy I've at least isolated the problem.
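One quick way to test the path through the switch once it's cabled back in is vmkping from the host, which sends traffic out the vmkernel interface the iSCSI traffic uses (the NAS IP below is a placeholder):

```shell
# Basic vmkernel-level reachability to the NAS
vmkping 192.168.1.20

# If jumbo frames are in play, verify the whole path passes a
# 9000-byte frame without fragmentation (8972 = 9000 minus headers)
vmkping -s 8972 -d 192.168.1.20
```

If the plain ping works but the large don't-fragment ping fails, some device in the path isn't passing jumbo frames.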
I got something similar 2 weeks ago while adding storage from SAN MSA1000 to ESXi.
The vSphere Client showed "the partition is blank", and I got the error message: Failed to get partition details.
What I did to fix it was SSH to the host.
type fdisk /vmfs/devices/disks/device_name
(device_name is the disk's identifier as listed under /vmfs/devices/disks - typically a vml.* or naa.* name; note that the 4d358e43-3d8ab100-c4eb-0015c5e99f8d string in your error is a VMFS volume UUID, not the disk device name)
then followed the fdisk help to create a primary partition.
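A rough sketch of what that fdisk session looks like (the naa.* device name is a placeholder, and the fb type code marks the partition as VMFS):

```shell
# Start an interactive fdisk session against the LUN's device node
fdisk /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx
# At the fdisk prompt:
#   n        -> create a new partition
#   p        -> primary
#   1        -> partition number 1
#   <Enter>  -> accept default first cylinder
#   <Enter>  -> accept default last cylinder
#   t        -> change the partition type
#   fb       -> VMware VMFS type code
#   w        -> write the table and exit
```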
After that, I added the storage successfully without any error.
So the thing is, the LUN has no partition on it, so ESXi fails to get the details.
Hope this helps.
If you read my post above, you'll see I isolated the issue to my storage network - either a bad cable or a switch configuration that needs to change. I thought it might be a disk formatting or file system issue like you're describing, but I couldn't even see the LUN from my host to perform any fdisk operations. I'm good to go now on this problem. Once I connected my host directly to the storage with a cat6 crossover cable, I was able to add storage immediately without issue. So I think I need to focus on my switch config - perhaps the iSCSI port is blocked. Shouldn't be a problem to fix that.
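If you want a quick check for the blocked-port theory before digging into the switch config, you can probe the iSCSI target port (TCP 3260 by default) through the switch from any machine on the same segment - this assumes netcat is available; the NAS IP is a placeholder:

```shell
# Verbose zero-I/O scan of the default iSCSI target port on the NAS
nc -zv 192.168.1.20 3260
```

If the connection succeeds through the switch but adding storage still fails, the problem is more likely an MTU or flow-control setting than simple port blocking.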