VMware Cloud Community
davehope
Contributor

ESXi 5.5 Local RDMs greater than 2TB

I recently added a set of 3TB WD Red SATA drives to an ESXi 5.5 U2 host and configured them as physical compatibility raw device mappings attached to a Windows Server 2012 guest. In the guest, these disks appeared with a capacity of 512 B and 16 EB of unallocated space.


The raw maps were created using vmkfstools as follows:

vmkfstools -z /vmfs/devices/disks/vml.0100000000202020202057442d574d43315430333739323233574443205744 WD_RED_1.vmdk

vmkfstools -z /vmfs/devices/disks/vml.0100000000202020202057442d574d43315430343235393733574443205744 WD_RED_2.vmdk

vmkfstools -z /vmfs/devices/disks/vml.0100000000202020202057442d574343344e484b3637433346574443205744 WD_RED_3.vmdk
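
For anyone wanting to repeat this, the vml identifiers above can be found by listing the devices on the host, for example (standard ESXi shell commands, given here only as a pointer):

ls -l /vmfs/devices/disks/

esxcli storage core device list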

If I take the disks out and connect them to a physical host, I am able to put a GPT on them and create a volume using Windows Storage Spaces. Once reattached to the ESXi host they work just fine, although the disks still show the same capacity/free space.

Looking around I can see others have encountered this problem, without any resolution:

https://communities.vmware.com/thread/468799

https://communities.vmware.com/message/2329909

https://communities.vmware.com/thread/466442

The disks emulate a 512 B sector size (512e), and I wonder whether this is part of my problem. Has anyone else encountered this, or been able to use directly attached raw device mappings with disks greater than 2TB?

14 Replies
sludgeheaddf
Contributor

The resolution is to upgrade the VM hardware version to 10 (ESXi 5.5), add a SATA controller (only available from HW v10), and attach the raw device mapping to the SATA controller rather than the default LSI Logic SAS controller.
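
In case anyone needs to do this without the Web Client, the equivalent .vmx entries on a powered-off HW v10 VM look roughly like the lines below. This is a sketch from memory rather than a definitive reference, so compare against a VM where the controller was added through the Web Client before relying on it:

sata0.present = "TRUE"
sata0:0.present = "TRUE"
sata0:0.fileName = "WD_RED_1.vmdk"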

No_Way
Enthusiast

Hi

You should know that disks larger than 2TB need to be initialized as GPT, not MBR.
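
For example, from an elevated command prompt inside the guest (disk 1 is just an illustration, and the disk needs to be empty before converting):

diskpart
list disk
select disk 1
convert gpt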

Also take a look at http://kb.vmware.com/kb/2058287

Hope this helps.

NW

davehope
Contributor

This issue was resolved by upgrading to ESXi 6.

The disks were already in GPT format, and unfortunately hardware version 10 didn't help.

No_Way
Enthusiast

Hi,

Good that you have resolved it, but v5.5 was not the issue. We have some VMs with RDM disks larger than 2TB working properly.

NW

davehope
Contributor

This was tested extensively in a lab environment; with no changes to the VM, upgrading the host to ESXi 6 was certainly the fix.

Did the disks you used emulate a 512 B sector size, or were they 4K native?
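
For reference, from inside a Server 2012 guest you can see what a disk reports with something like the following (D: is just an example volume on the disk in question); it prints the logical and physical bytes per sector:

fsutil fsinfo sectorinfo D: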

sludgeheaddf
Contributor

I'm aware they need to be GPT. It did not matter in this scenario; the disks were not read properly when connected to the LSI SAS controller on ESXi 5.5 U2. This is not the case with every manufacturer: some Western Digital drives had no issues, but some HGST drives did.

sludgeheaddf
Contributor

While upgrading to 6 may be one fix, this is a solution as well that others may find beneficial if they cannot yet upgrade to 6.

sludgeheaddf
Contributor

If you had added the SATA controller to the VM rather than connecting the disks to the LSI SAS controller, you might have had a different outcome.

sludgeheaddf
Contributor

My post did not indicate that 5.5 is the issue. I have other 4TB drives under 5.5 that work fine connected to LSI SAS; these in particular required connection to a SATA controller on HW v10. All of this is under ESXi 5.5 U2.

davehope
Contributor

It's great you found a solution to your issue - my point was that it didn't work for me with the controllers and disks I had.

From what I can make out, my issue was caused by the disks being 512e rather than 4K native.
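
For anyone wanting to check the same thing on the host side, I believe later esxcli builds can list the logical and physical block sizes per device along the lines of the command below, though I can't remember which release introduced it, so treat it as a pointer rather than gospel:

esxcli storage core device capacity list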

Apologies if my comment caused offence.

No_Way
Enthusiast

Agreed, you are right, and no offence taken :)

The VMware KB has the information regarding the controller.

Before replying, I made a test: I created a 3TB volume (512 B sectors) on my EqualLogic array, presented the volume to my host over iSCSI, and then added it directly to a Windows 2008 R2 VM as a mapped raw disk. My VM is using LSI Logic SAS.

Inside the Windows Server guest I initialized the disk as GPT and formatted it, and it was added to the guest without any issues, showing 2.99TB of free space.

This test was made with vCenter 5.5, an ESXi 5.0 host and VM hardware version 8, so no issues there.

NW

No_Way
Enthusiast

To answer your question: no, they are all 512. We don't have 4K volumes.

NW

zitrodotnet
Contributor

NW, you are most certainly correct. The key difference in davehope's situation (and mine as well) is that the RDMs point to locally attached SATA disks > 2TB. Officially unsupported, but it works well with local SATA disks <= 2TB attached to a SCSI controller in HW version <= 9.


Joe

zitrodotnet
Contributor

sludgeheaddf,

How did you go about creating the SATA controller in ESXi 5.5? Were you able to create it without relying on the Web Client (e.g. by manually editing the .vmx file)?
