VMware Cloud Community
td3201
Contributor

VMFS vs mapped raw LUN performance

The following are two tests I ran inside a VM. One writes to the OS drive, which sits on a regular VMFS datastore; the other writes to a mapped raw LUN. Both are on the same backend storage (EqualLogic iSCSI) connected via MPIO with fixed path selection over 1 Gb/s links. Should I be seeing that kind of difference, and is 37 MB/s OK?

# dd if=/dev/zero bs=9000 count=100000 of=test && rm -f test

100000+0 records in

100000+0 records out

900000000 bytes (900 MB) copied, 8.83205 seconds, 102 MB/s

# dd if=/dev/zero bs=9000 count=100000 of=test && rm -f test

100000+0 records in

100000+0 records out

900000000 bytes (900 MB) copied, 23.9424 seconds, 37.6 MB/s

mcowger
Immortal

Which is which (either way you shouldn't see this level of difference)? Also, why such a strange block size?

--Matt
VCP, VCDX #52, Unix Geek, Storage Nerd
blog.cowger.us

td3201
Contributor

Haha, if you're asking, my logic must be broken and uninformed (as usual). The 9000 was an attempt to line up my writes with the underlying jumbo frame configuration of 9000 bytes.

That potential embarrassment aside:

RDM = 102 MB/s

VMFS = 37.6 MB/s

td3201
Contributor

After you questioned this, it got me thinking differently. Is the underlying Ethernet frame size irrelevant in this context because of the filesystem? Would a 4k block size, GFS2's standard, have been a better choice? Probably irrelevant for such a small write, but an interesting conversation nonetheless.
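For reference, the filesystem block size is easy to confirm from inside the guest; a quick check (the mount point here is just a placeholder):

# Show filesystem-level statistics, including the block size, for the
# filesystem being benchmarked; GFS2 normally defaults to 4096 bytes.
stat -f /test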

AntonVZhbankov
Immortal

>One is to the OS drive which is inside a regular VMFS datastore

Was this vmdk created with the default options? Test performance with a vmdk created as "fault tolerance compatible" (i.e. eagerzeroedthick).
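If you're not sure how an existing disk was created, comparing the allocated size against the provisioned size of its flat file on the datastore gives a quick hint (the datastore and VM names below are just placeholders):

# From the ESX/ESXi console: a thin disk shows far fewer allocated blocks (du)
# than its provisioned size (ls); a thick disk shows the two roughly equal
# (this does not distinguish zeroedthick from eagerzeroedthick, though).
ls -lh /vmfs/volumes/datastore1/testvm/testvm-flat.vmdk
du -h /vmfs/volumes/datastore1/testvm/testvm-flat.vmdk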


---

MCITP: SA+VA, VCP 3/4, VMware vExpert

http://blog.vadmin.ru

EMCCAe, HPE ASE, MCITP: SA+VA, VCP 3/4/5, VMware vExpert XO (14 stars)
VMUG Russia Leader
http://t.me/beerpanda
td3201
Contributor

It was created as thin provisioned. Perhaps I should test with one that was not, just for fun?

AntonVZhbankov
Immortal

When a vmdk is thin provisioned or created with the default options (zeroedthick), each block is zeroed on first access. So your benchmark is actually paying for a full block of zeroes to be written plus your own write.

If you want to test real performance, create the vmdk as eagerzeroedthick (the vmdk is zeroed at creation time; you can create it as "fault tolerance compatible") or as thick (the vmdk is not zeroed at all).
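If the UI is awkward for this, the same disk can be created from the console with vmkfstools; a minimal sketch, with the size, datastore and file names as placeholders:

# Create a 10 GB disk in eagerzeroedthick format (every block zeroed up front),
# then attach it to the test VM as an additional disk.
vmkfstools -c 10g -d eagerzeroedthick /vmfs/volumes/datastore1/testvm/testdisk.vmdk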


td3201
Contributor

Understood. I can't imagine that is responsible for the large difference between RDM and VMFS, which is my biggest concern. In the meantime, I will find a mature, thick-provisioned VM to run a quick test on.

AntonVZhbankov
Immortal
Accepted Solution

>I can't imagine that is resulting in the large difference between RDM and VMFS

The difference between RDM and VMFS is negligible. The only significant vmdk performance drawback is that disks are zeroed by default (due to security concerns).


td3201
Contributor

Any tips as to why I am seeing this difference, then? From a clustered-filesystem perspective, I think of distributed locks as a bottleneck, but those wouldn't come into play for block operations like a dd, only for VMFS-wide operations such as snapshots, migrations, etc. No?

td3201
Contributor

I created another LUN thick without fault tolerance and here's the result of the same test:

# dd if=/dev/zero bs=4k count=200000 of=/test/test && rm -f /test/test

200000+0 records in

200000+0 records out

819200000 bytes (819 MB) copied, 21.7459 seconds, 37.7 MB/s

I would love to blame the storage and everything in between, but I just see much better writes with RDM, which is where I am puzzled.

AntonVZhbankov
Immortal

>I created another LUN thick without fault tolerance and here's the result of the same test:

Create WITH fault tolerance.


td3201
Contributor

I got a lot better with fault tolerance:

root@server ~# dd if=/dev/zero bs=4k count=200000 of=/test/test && rm -f /test/test

200000+0 records in

200000+0 records out

819200000 bytes (819 MB) copied, 8.77299 seconds, 93.4 MB/s

I have to do some substantive reading but what's your take on why?

bulletp31
Enthusiast

Kool tests.

What sort of SCSI adapter are you pairing with the vmdk?

td3201
Contributor

iSCSI, no hardware HBA, just using the server NICs.

rickardnobel
Champion

>I got a lot better with fault tolerance:
>
>root@server ~# dd if=/dev/zero bs=4k count=200000 of=/test/test && rm -f /test/test
>200000+0 records in
>200000+0 records out
>819200000 bytes (819 MB) copied, 8.77299 seconds, 93.4 MB/s
>
>I have to do some substantive reading but what's your take on why?

When you enable the FT option on the disk it becomes an eagerzeroed disk, that is, a disk where all sectors are zeroed out at creation time rather than on first write.

>The 9000 was an attempt to line up my writes with the underlying jumbo frame configuration of 9000 bytes.

One thing to note is that even with an MTU of 9000 there is overhead for IP, TCP and iSCSI (or whatever storage protocol is in use), perhaps around 60 bytes per frame. So if you want to do something like this in future tests you should lower the block size somewhat; a rough sketch of the math is below.
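As an illustration, assuming a plain IPv4/TCP iSCSI session with no IP or TCP options and a standard 48-byte iSCSI basic header segment, the per-frame payload works out roughly like this:

# Approximate usable payload per 9000-byte jumbo frame (header sizes assumed):
#   9000 (MTU) - 20 (IPv4) - 20 (TCP) - 48 (iSCSI basic header segment) = 8912 bytes
# so a block size that stays under that, e.g. 8k, is a better fit than 9000
# if the goal is to keep one write per frame:
dd if=/dev/zero bs=8k count=100000 of=test && rm -f test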

My VMware blog: www.rickardnobel.se
RParker
Immortal

>That potential embarrassment aside:

Well, the embarrassment would only be about your configuration, not about VMware, because there are numerous OTHER tests showing that VMFS and RAW are almost identical. Maybe VMFS is slightly slower, but only depending on the application, because RDM really offers ZERO performance benefit.

So something between these two is different, because there should NOT be this much difference in speed. We have conducted tests, and I tried to blame VMFS and datastores in the past, and found this to be true as well: VMFS really isn't the bottleneck.

I can do similar benchmarks one way, only to find another benchmark disproves them. RAW also provides access to the cache, whereas a VMDK does not, so MAYBE the RDM is able to hit the cache, hence the artificially inflated number.
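One way to take guest-side caching out of the equation is to have dd either bypass the page cache or flush before it reports a rate; a minimal sketch with GNU dd (the target path is just a placeholder):

# Bypass the guest page cache so the reported rate reflects what reached storage:
dd if=/dev/zero of=/test/ddtest bs=1M count=800 oflag=direct
# Or let the cache be used but force a flush before dd prints its summary:
dd if=/dev/zero of=/test/ddtest bs=1M count=800 conv=fdatasync
rm -f /test/ddtest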

td3201
Contributor

The embarrassment comment stemmed from me attempting to line up my I/O transfers with the underlying Ethernet frame size.

I agree with your comments. I am searching for the underlying root cause of the difference. Could it be VMFS-related locking or SCSI reservations? If it were a caching issue, would I see inflated SCSI command latencies? (I'm not seeing those, by the way.)
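For the latency side, the host-level counters (DAVG/KAVG/GAVG per device and adapter) are visible in esxtop, and a batch capture while re-running the test makes them easy to compare; the sample interval, count and output path below are arbitrary:

# Record one minute of host-side storage counters in 2-second samples while
# the guest runs its dd test; the CSV can be graphed in perfmon or a spreadsheet.
esxtop -b -d 2 -n 30 > /tmp/esxtop-vmfs-vs-rdm.csv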

AntonVZhbankov
Immortal

>The embarrassment comment stemmed from me attempting to line up my I/O transfers with the underyling ethernet frame size.

Do not forget that iSCSI uses CPU power, and in the case of a guest OS iSCSI initiator it consumes the VM's vCPU, while RDM and vmdk I/O use the ESX host's CPU, leaving all the vCPU power to the VM.


td3201
Contributor

We don't deploy initiators inside the guest OS, only RDM and VMDK.
