VMware Cloud Community
Mjm507
Contributor

Increase VMDK write speed on a single VM

Dear all,

We have a VM with a very IO-intensive application that requires very fast write speeds. I don't believe it's a storage issue, since we are running this VM on an all-flash array. I am trying to increase this single VM's write speed, as requested by the application team.

I am thinking of the following. 

  • convert the VMDK from thin to thick provisioned
  • change the VMDK shares value to High (2000)
  • change the virtual SCSI controller from LSI Logic to VMware Paravirtual (PVSCSI)

Is there any other way to increase the write speed?

9 Replies
depping
Leadership

I would recommend first figuring out where you are experiencing a bottleneck. You can use something like esxtop to see where latency occurs. Of course, all of the mentioned options could help, but if the storage layer is the issue, then none of those suggestions would actually change anything.
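For example, from an SSH session on the ESXi host (the counters below are the standard esxtop disk latency fields):

    # start esxtop on the host
    esxtop
    # press 'd' for the disk adapter view, 'u' for disk devices,
    # or 'v' for per-VM virtual disks, then watch the latency columns:
    #   DAVG/cmd - latency at the device/array
    #   KAVG/cmd - time spent in the VMkernel (queuing)
    #   GAVG/cmd - total latency as seen by the guest (DAVG + KAVG)

High DAVG points at the array or fabric; high KAVG points at queuing inside the host.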

kastlr
Expert

Hi,

 

Beside esxtop, you could also use vscsiStats to get an even better understanding of the IO profile generated by the application.
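A minimal vscsiStats session on the ESXi host looks like this (replace <worldGroupID> with the ID from the -l listing):

    # list running VMs and their world group IDs
    vscsiStats -l
    # start collecting for the VM in question
    vscsiStats -s -w <worldGroupID>
    # let the workload run for a while, then print histograms
    vscsiStats -p ioLength -w <worldGroupID>          # IO size distribution
    vscsiStats -p outstandingIOs -w <worldGroupID>    # parallelism (OIOs)
    vscsiStats -p latency -w <worldGroupID>           # latency distribution
    # stop collecting when done
    vscsiStats -x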

And of course you should also collect performance stats from inside the Guest OS.
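On a Linux guest, for example, iostat from the sysstat package is a simple starting point:

    # inside the guest; on Ubuntu/Debian: sudo apt-get install -y sysstat
    iostat -x 1
    # watch w/s (write IOPS), wkB/s (write throughput),
    # w_await (write latency in ms), aqu-sz (queue depth) and %util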

Depending on the findings, the kind of OS/application, and the kind of storage used, we might be able to assist you further.

 

Regards,

Ralf


Hope this helps a bit.
Greetings from Germany. (CEST)
Mjm507
Contributor

Hi Ralf,

 

Thanks for your response. Unfortunately, I don't have much insight into the application that is running on this VM.

It's a customer VM and they are not helping us; they are just demanding very high write speed for this intensive application. I know for a fact that our SAN storage is not the issue, and we are not running the VM on a crowded host.

I am planning to change the storage controller from LSI to VMware Paravirtual, but I am afraid it will break something or corrupt the VM. The VM's Guest OS is Ubuntu 20.04. I believe that is all I can do to improve the situation. Is there any other trick I could use from the VMware perspective?
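One way I am thinking of de-risking the controller change (a rough plan, assuming the stock Ubuntu 20.04 kernel, which ships the vmw_pvscsi driver in-box):

    # inside the guest: confirm the PVSCSI driver is available
    modinfo vmw_pvscsi
    # 1. snapshot/backup the VM
    # 2. add a second, PVSCSI controller with a small temporary VMDK,
    #    boot, and confirm the driver loaded and the disk is visible
    lsmod | grep vmw_pvscsi
    lsblk
    # 3. power off, move the existing disks to the PVSCSI controller,
    #    drop the temporary disk, and power back on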

 

Regards 

MMA 

depping
Leadership

But couldn't you at least look at the data to figure out where the bottleneck is, if there even is one? It could easily be the disk controller. It could be the fact that ALL IO is going to a single VMDK. It could be that you are hitting an HBA bottleneck.

kastlr
Expert

Hi,

 

Any of the changes that would increase IO performance would require customer cooperation (if not running on vSAN).

Those changes would be

  • switching from the LSI controller to the paravirtualized SCSI (PVSCSI) controller
  • adding multiple VMDKs attached to multiple SCSI controllers
  • using Guest OS LVM to create a striped filesystem across those VMDKs (see the sketch below)
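
As a sketch of the LVM striping option (the device names /dev/sdb, /dev/sdc, /dev/sdd and the volume names are placeholders; verify the real devices with lsblk, and note this builds a new, empty filesystem rather than converting an existing one):

    # three new VMDKs, ideally spread across separate SCSI controllers
    sudo pvcreate /dev/sdb /dev/sdc /dev/sdd
    sudo vgcreate datavg /dev/sdb /dev/sdc /dev/sdd
    # -i 3 stripes across all three disks, -I 64 sets a 64 KiB stripe size
    sudo lvcreate -i 3 -I 64 -l 100%FREE -n datalv datavg
    sudo mkfs.ext4 /dev/datavg/datalv
    sudo mkdir -p /mnt/data
    sudo mount /dev/datavg/datalv /mnt/data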

But all of those changes might not solve the problem if the application sends IOs with few or no outstanding IOs (OIOs).

I had several "performance" discussions with customers complaining about IO performance when using Windows Explorer to copy one large file.

 

I would strongly recommend using esxtop or vscsiStats, as those tools will help you characterize the workload.

They could show that the problem might be caused by the application simply not using parallelism when sending IOs to the disk.

And in such a case, placing that workload on a shared storage array might not be the optimal solution.
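
If you want to demonstrate the effect, a quick fio run at two queue depths makes it visible (the file path and sizes are just examples; point it at a scratch file, not live data):

    # one outstanding IO, similar to a single-threaded file copy
    fio --name=qd1 --filename=/mnt/data/fio.test --size=2G --rw=write \
        --bs=64k --ioengine=libaio --direct=1 --iodepth=1 \
        --runtime=30 --time_based
    # 32 outstanding IOs; on a healthy all-flash path this typically
    # shows far higher throughput
    fio --name=qd32 --filename=/mnt/data/fio.test --size=2G --rw=write \
        --bs=64k --ioengine=libaio --direct=1 --iodepth=32 \
        --runtime=30 --time_based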


Hope this helps a bit.
Greetings from Germany. (CEST)
Mjm507
Contributor

Indeed, sir, they are running this application with one single VMDK. The disk size is 8 TB. I believe they are not following best practices for this application. Thanks for your response.

Mjm507
Contributor

Thanks for your response; you have been a huge help. I will look into this more using the tools you have mentioned; hopefully I can find something useful.

Regards.

MMA

Mhd_Damfs
Contributor

I'm actually using this solution in a write-intensive VM, and I've seen an uplift of at least 30% in write speed and 60% in IOPS; however, the disk utilisation for my 3 disks is 85%, 80%, and 60%.

compdigit44
Enthusiast

Are the hosts connected to the flash array via iSCSI, FC, NFS, or direct-attached storage? Are the hosts overloaded with CPU cycles?
