VMware Cloud Community
nextech
Enthusiast

vCenter Server 5.5 Getting Started (with 4TB VMFS datastores)

We have 9 physical server nodes/hosts and are preparing to set up a VMware vCenter test lab (for learning and practicing with vCenter Server) for performance evaluation, before deciding whether to use vSphere in production on our client's hardware.

We've read through the vSphere Installation and Setup manual, but we're facing several problems.

We have eight diskless nodes (each booting ESXi 5.5 off an 8GB USB stick)

We have one main server node (that is booting ESXi 5.5 off a 16GB USB stick)

- The main server node has four 4TB hard drives installed in it (all are blank/empty/unformatted)

This is what we've done so far:

1) Installed ESXi 5.5 on all 9 nodes (on each USB stick)

This is what we still need to do:

1) Create four 4TB VMFS datastores (or create a 16TB vSAN from the four 4TB drives?)

2) Install vCenter Server Appliance VM (with vCenter Single Sign-On, vSphere Web Client, vSphere Inventory Service and vCenter Server) on the newly created datastore.

The problem we're facing now is creating a 4TB VMFS datastore on each of the four 4TB hard drives, or possibly setting up a vSAN that pools the four 4TB drives (into one large 16TB cluster?).

We want to install vCenter Server Appliance on the 4TB VMFS datastore (or vSAN), and use the vCenter Server Appliance to manage/configure each of the 9 ESXi 5.5 server nodes.

After reading the instructions, we're left thoroughly confused, because they seem to create a chicken-and-egg problem. The instructions say to install the vCenter Server Appliance on a datastore, but we can't create a 4TB datastore using the C# vSphere Client, so how do we create the 4TB VMFS datastore in the first place?

The instructions are unclear, and we can't figure out, step by step, how to create a single 4TB VMFS datastore using the vSphere Client. And how do we install the vCenter Server Appliance onto a 4TB VMFS datastore when creating that datastore seems to require a running vCenter Server Appliance (with the Web Client)?

I really wish the VMware vSphere Client (the C#-based client) were updated to support the vSphere 5.5 features, so it could create 4TB VMFS datastores and deploy the vCenter Appliance onto one.

Any ideas as to how we can format our first 4TB VMFS datastore using the vSphere Client? We have no datastores created yet, and although the machine has four 4TB hard drives installed, we're not sure how to create a 4TB VMFS datastore on them, since the vSphere Client doesn't seem to support the vSphere 5.5 features (including 4TB datastores). And there's no way to use the Web Client without the vCenter Server Appliance (which hosts it) already running, correct?

How do we create a 4TB VMFS datastore if the vSphere Client doesn't support creating them? Can the vSphere Web Client be installed locally, on a standalone machine? Is that the only way to get a 4TB VMFS datastore created?

Thank you for all your help,


8 Replies
a_p_
Leadership

VMFS-5 datastores with sizes of up to ~64TB have been supported since vSphere 5.0. However, you may need to check whether LUNs/disks >2TB are supported by the controller you use. And yes - if they are supported - you can create the datastore using the vSphere Client.

Another thing: for vSAN, you need a minimum of 3 hosts, each with HDDs as well as SSDs!
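
To check what ESXi actually sees, you can log in to the ESXi Shell and run something like this (just a sketch; the naa.* device ID is a placeholder for your actual disk):

    # list all storage devices ESXi detects, including their size in MB
    esxcli storage core device list

    # for a single disk, print the first and last usable sector
    # (a 4TB disk should report roughly 7.8 billion 512-byte sectors)
    partedUtil getUsableSectors /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx

If the size reported there is already wrong, the controller/driver is the problem, not the vSphere Client.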

André

nextech
Enthusiast

The specs on the server (PowerEdge T110) that we installed the 4TB hard drives into:  http://www.dell.com/downloads/emea/products/t110_spec_sheet.pdf


Are there any instructions/tutorials on how this 4TB datastore can be created locally (from the command line) on a local ESXi 5.5 server?

This is all I could find here:  https://communities.vmware.com/message/2302383

and here:  VMware KB: Support for virtual machine disks larger than 2 TB in vSphere 5.5

These are the ONLY documents I could find concerning 4TB datastores, but neither addresses how to actually create a new 4TB VMFS datastore.

Is this something that can be done from the command line on the local server?
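
From what we've pieced together from VMware's KB article on partedUtil, the command-line route would look something like this (completely untested on our side so far; the naa.* device ID, the <lastUsableSector> value, and the datastore name are placeholders):

    # 1) find the device ID of the blank 4TB disk
    ls /vmfs/devices/disks/

    # 2) write a fresh GPT label to the disk (this wipes any old partition table)
    partedUtil mklabel /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx gpt

    # 3) look up the last usable sector on the disk
    partedUtil getUsableSectors /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx

    # 4) create a single VMFS partition spanning the whole disk
    #    (AA31E02A... is the standard VMFS partition type GUID)
    partedUtil setptbl /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx gpt "1 2048 <lastUsableSector> AA31E02A400F11DB9590000C2911D1B8 0"

    # 5) format that partition as VMFS-5
    vmkfstools -C vmfs5 -S datastore4TB /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx:1

Can anyone confirm whether this is the right approach?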

I would think the server supports hard drives larger than 2TB; we just updated the BIOS. I might need to look into this further to make sure the server actually recognizes the drives as 4TB drives.

Specs can be found here:  http://www.dell.com/downloads/global/products/pedge/t110_spec_pt.pdf

and here:  http://www.dell.com/downloads/global/products/pedge/en/T110SpecSheet.pdf

But as long as the server recognizes the drives as 4TB, ESXi 5.5 should see them as 4TB drives and be able to format each one as a 4TB VMFS datastore, correct? And this can be done with the (C#-based) vSphere Client, correct?

Thank you,

nextech
Enthusiast

I just checked Hitachi's website, and it shows the PowerEdge T110 as working with the 4TB hard drives that we purchased.

See here:  http://www.hgst.com/tech/techlib.nsf/techdocs/C17F42782AA50DCC882579D7007C4B29/$file/US7K4000CompatG...

Is there a known problem with VMware ESXi's SATA drivers on the PowerEdge T110 that prevents the drives from being properly recognized as 4TB drives?

nextech
Enthusiast

VMware seems to recognize the drives as 3.6TB (4TB) drives.

If we try to create a datastore, it shows two 3.6TB (4TB) hard drives.

See here:  Screenshot 2014-01-04 17.55.51.png

But when we try to actually create the datastore, it shows the capacity as 3.6TB (4TB) but says the maximum formatting size is 2TB. See here:

Screenshot 2014-01-04 18.05.18.png

So clearly the server IS recognizing the hard drives as 4TB (3.6TB) drives. VMware sees them as 4TB (3.6TB) drives and even shows the maximum capacity as 3.64TB. But why can't we create ONE large 4TB (3.6TB) VMFS-5 datastore on each drive?

We can try to create the 4TB (3.6TB) datastores, but for some reason the vSphere Client imposes a 2TB limit as the maximum size when formatting the drives as VMFS-5.

Is there any way to get around this 2TB maximum partition size? Can't we just create one large 4TB partition and format it as VMFS-5?

nextech
Enthusiast

When we select the 4TB device/drive and choose "Maximum available space" (under Formatting), the wizard shows the current partition format as "MBR" with two partitions: one 465.75GB partition and a second listed as "free space" with 3.18TB available.

If we select "Use 'Free space'" and click Next, the "Formatting" (Capacity) screen shows the maximum available space as 1582.25GB (exactly HALF of the actual free space).

See here: Screenshot 2014-01-04 18.21.48.png

See here: Screenshot 2014-01-04 18.21.59.png

See here: Screenshot 2014-01-04 18.22.08.png

For some reason, it always shows the maximum available space (on the Disk/LUN formatting page) as HALF of the physical space actually available on the drive.

Is this a VMware vSphere Client 5.5.0b bug? Or an ESXi 5.5.0 bug?

If we go back and select "Use all available partitions" (instead of "Use 'Free space'"), you would think that we could now use all 3.6TB (4TB) of available disk space.

Instead, it changes the partition format from MBR to GPT, and the capacity now shows as 3.64TB (4TB). But when we select "Use all available partitions" and click Next, the "Disk/LUN - Formatting" page says the maximum available space is 2048GB (different from the 1582.25GB before, but still far less than the 3.64TB capacity). Is the vSphere 5.5.0b Client broken?

See here: Screenshot 2014-01-04 18.34.38.png

See here: Screenshot 2014-01-04 18.34.28.png

See here:  Screenshot 2014-01-04 18.35.42.png

The 4TB hard drives are being recognized as 4TB (3.6TB) hard drives, but the vSphere Client is not allowing us to create a 4TB (3.6TB) datastore using VMFS-5.

We are using vSphere Client v5.5.0b (Build 1474107) and ESXi 5.5.0 (Build 1331820).

Any ideas as to how to fix this and create/format a 4TB (3.6TB) datastore using the vSphere Client?
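
In case it helps with debugging, the current partition layout of the disk can also be inspected from the ESXi Shell (a sketch; the device ID is a placeholder):

    # print the partition label and table for the disk
    # first line is the label type (msdos = MBR, or gpt), followed by
    # one line per partition: number, start sector, end sector, type, attributes
    partedUtil getptbl /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx

That should at least show whether the leftover 465.75GB partition and the MBR label are what the client is choking on.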

nextech
Enthusiast

The VMware vSphere Client sees the hard disk capacity as 3.64TB (4TB) but only allows us to create a MAXIMUM primary partition size of 2.00TB.

Any ideas as to how to fix this?  I thought you could create 4TB partitions (VMFS-5 datastores) using the vSphere Client?

Is this a bug in the vSphere 5.5.0b Client? Why are we being limited to a 2.00TB maximum size when formatting the VMFS-5 disk partition? Is there any way to manually format the partition as 3.64TB from the command line?

nextech
Enthusiast

I figured out the solution to the problem.

There seems to be a bug (or several bugs?) in ESXi 5.5.0 and/or the vSphere 5.5.0b Client when trying to create a 4TB datastore using the vSphere Client.

Whenever I tried to create a VMFS-5 datastore on a 4TB disk, it came back with the following error:

Screenshot 2014-01-04 21.01.58.png

A Specified parameter was not correct.

Vim.Host.DiskPartitionInfo.Spec

Call "HostStorageSystem.ComputeDiskPartitionInfo" for object "storageSystem" on ESXi "x.x.x.x" failed.

I found the following article that describes this ESXi 5.5 Installation Error here: https://communities.vmware.com/message/2294919#2294919

and here: https://communities.vmware.com/thread/457982

The problem seems to be triggered by any disk that has existing partitions on it: ESXi 5.5.0 is not properly wiping the disk (it is unable to delete the old partitions).

More information can be found here:  http://blog.the-it-blog.co.uk/2013/10/10/cant-add-a-storage-device-to-vmware-esxi-5-5/

and here:  http://www.pkdavies.co.uk/163-call-hoststoragesystem-computediskpartitioninfo-for-object-storagesyst...

If you manually delete all the partitions (logged in as root) and then go back and create the VMFS-5 datastore, it should work.
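
For reference, the manual route looks roughly like this from the ESXi Shell, logged in as root (the device ID and partition number are placeholders; check the actual values with getptbl first):

    # list the existing partitions that are blocking the wipe
    partedUtil getptbl /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx

    # delete each listed partition by its number, e.g. partition 1
    partedUtil delete /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx 1

    # alternatively, writing a fresh empty label clears the table in one go
    partedUtil mklabel /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx gpt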

The EASIEST fix for me, though, was to use the vSphere Client itself: create the datastore as VMFS-3 first (which I did), then delete that new VMFS-3 datastore, and then create a VMFS-5 datastore in its place. The VMFS-3 creation wipes the disk, so the subsequent VMFS-5 creation succeeds (it did for me).

I'm not exactly sure what the bug in ESXi 5.5.0 (or the vSphere Client 5.5.0b) is, or why ESXi 5.5.0 is unable to delete existing partitions, but this sequence fixed the problem for me:

1) Create a 2TB VMFS-3 datastore on the new 4TB drive (be sure to choose "Use all available partitions" under the current disk layout).

2) Once the 2TB VMFS-3 datastore is created, delete it again.

3) Create a new 4TB (3.64TB) VMFS-5 datastore on the drive.

After those steps I was able to create the 3.64TB VMFS-5 datastore without any errors.

I had to do this for each and every 4TB drive installed in the system. The underlying bug appears to be that ESXi 5.5.0's VMFS-5 datastore creation cannot delete the existing partition data on a drive; other users have reported the same problem in several other threads. It is a reproducible error: I hit it with every single 4TB drive I add to the system.

Hopefully this solution helps future users who run into the same problem or stumble upon this post via a Google search.

a_p_
Leadership

It's indeed interesting that creating a VMFS-3 partition wipes the previously existing partition. Thanks for sharing this.

Anyway, some words for clarification about the different sizes:

  • The 4TB HDD size is a marketing size; the real technical size of the HDD is 3.64TB (4,000,000,000,000 / 1024^4 ≈ 3.64). The marketing guys calculate with 1,000 (base 10) rather than 1,024 (base 2).
  • The maximum 2TB file size only shows up in the Windows-based vSphere Client. As of ESXi 5.5, the maximum size of a single file (e.g. a virtual disk) on a datastore can be up to 62TB, but this is only supported using the vSphere Web Client.

André