I have deleted the initial datastore through the vSphere Client. When I go to add a datastore I get the following error: "Either the selected disk already has a VMFS datastore or the host cannot perform a partition table conversion. Select another disk." The disk in question is a RAID 5 array. On the Select Disk/LUN screen it does show up as an 838.10GB drive. The command esxcfg-volume --list shows nothing. Does anybody know why I cannot re-create a datastore?
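If it matters, the disk can also be inspected from the ESXi shell with the following (the naa.* device identifier is just a placeholder for the real one):

esxcli storage core device list          (lists the devices the host sees and their naa.* identifiers)
partedUtil getptbl /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx          (shows any partitions still present on the device)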
If this is a local datastore, you'll probably have to rebuild your ESXi host.
What I'm trying to accomplish is to recreate the datastore with a larger block size. It looks like I cannot delete and recreate it once it has been created by the installation?
You might try to use the utility they mention in the KB article to see if there is a partition listed on the array that you can manually delete.
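Assuming the utility is partedUtil, something along these lines from the ESXi shell should show and remove a stale partition (the device name and partition number are only examples, and deleting a partition destroys whatever is on it, so double-check the numbers first):

partedUtil getptbl /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx
partedUtil delete /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx 3

Once the stale VMFS partition is gone, the Add Storage wizard should offer the disk again.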
The hypervisor and the datastore are installed on the same array. I'm assuming that the other partitions it warns about are the hypervisor's partitions (diagnostic and such).
I think this can be avoided by using a dedicated disk for the hypervisor next time.
With a little more research I have come up with the answer: block size is a non-issue with version 5, since VMFS-5 uses a single unified block size.
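If you want to double-check on an existing VMFS-5 volume, vmkfstools reports the block size (the datastore name below is just an example):

vmkfstools -Ph /vmfs/volumes/datastore1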
Still, this is a bug of the first order: deleting the default datastore is a standard first step for a 4.x ESXi install.
Can someone from VMware confirm this will be fixed in 5.0.1? Reinstalling isn't an acceptable answer.
Three hours' worth of work to find out the hard way about the limitation covered in this KB article: http://kb.vmware.com/kb/2000454
My recommendation: stay with 4.1, or this issue really will waste everyone's time. Combined with the other issues I have experienced, 5.0 has so far been a big letdown.
I have the same problem, and this KB: http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=200045...
is not the solution.
I have installed ESXi 5.0 on a test machine and I don't want to use all the available disk space for the datastore. I deleted the original 300GB datastore in order to create a smaller one of about 100GB. Then this problem appeared...
I hope that this behaviour will be fixed soon in a new release. :-(((
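For what it's worth, a smaller datastore can also be created from the ESXi shell instead of the wizard, assuming the disk is dedicated to the datastore and is not the ESXi boot disk; the device name, sector count (roughly 100GB here) and datastore label are only placeholders, and the long GUID is the VMFS partition type GUID from VMware's KB articles:

partedUtil setptbl /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx gpt "1 2048 209717247 AA31E02A400F11DB9590000C2911D1B8 0"
vmkfstools -C vmfs5 -S smalldatastore /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx:1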