VMware Cloud Community
JustyC
Enthusiast

VMFS Sizing & Disk Latency

We have a couple of W2K8 R2 VMs that host DB2 databases. The databases are growing rapidly, but we don't want to allocate an exorbitant amount of disk storage right away. They are currently 500 GB and will reach 1+ TB after a year. Can we create a 1 TB VMFS datastore now and, when free space runs low, add a second extent of 500-1000 GB using the vSphere Client?

Also, any recommendation on block size when formatting the VMFS? It appears the larger the VMFS, the larger the block size needs to be. We are using a 4 MB block size for a 600 GB datastore.
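
For context on why block size matters here: on VMFS-3 the block size caps the largest single file (such as a .vmdk), not the overall datastore size. A minimal Python sketch of those documented limits, useful for picking a block size before the data disk grows past 1 TB:

```python
# VMFS-3 block size determines the largest single file (e.g. a .vmdk) the
# datastore can hold. These pairs are the documented VMFS-3 limits.
VMFS3_MAX_FILE_GB = {1: 256, 2: 512, 4: 1024, 8: 2048}  # block size (MB) -> max file (GB)

def min_block_size_mb(required_file_gb):
    """Smallest VMFS-3 block size able to hold a file of the given size."""
    for block_mb, max_gb in sorted(VMFS3_MAX_FILE_GB.items()):
        if required_file_gb <= max_gb:
            return block_mb
    raise ValueError("A single file larger than 2 TB will not fit on VMFS-3")

# A DB2 data vmdk expected to pass 1 TB needs at least an 8 MB block size,
# since a 4 MB block size caps individual files at 1 TB (1024 GB).
print(min_block_size_mb(1500))  # -> 8
```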

Backing up the DB2 database is extremely slow. It takes approx. 5.5 hours as a VM on a NetApp FAS2050, while a similar physical DB2 server with the same data takes half that time. The vCenter performance tool shows disk latency jump from around 5 to 200+ milliseconds as soon as the backup starts.

FYI: the DB2 databases contain many large files (BLOBs), which may have an impact if the block size is not optimal.

2 Replies
bsti
Enthusiast

You're using a NetApp SAN; have you looked into Thin Provisioning at all?

That will allow you to present a 1 TB LUN to VMware for your datastore while only consuming the space you're actually using on the back end (at the aggregate level). So, for instance, even though your datastore is 1 TB, you will only be using 500 GB from your aggregate. You take a slight performance hit when doing this, so I would not recommend it for an OLTP database.

To directly answer your question, yes, you can extend a datastore, but I think the limit is 2 TB in size. The following has some good documentation on both thin provisioning and dynamically extending datastores:

http://www.vmware.com/files/pdf/VMW_09Q1_WP_vSphereStorage_P10_R1.pdf
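
If you want to check how much of that thin-provisioned space a datastore is actually consuming, and which extents back it, here is a minimal sketch using pyVmomi (the open-source Python bindings for the vSphere API; newer tooling than what this thread's vSphere 4 environment shipped with, so treat it as illustrative). The vCenter host name and credentials are placeholders.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder connection details -- substitute your own vCenter and credentials.
si = SmartConnect(host="vcenter.example.local", user="administrator",
                  pwd="changeme",
                  sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.Datastore], True)

GB = 1024 ** 3
for ds in view.view:
    s = ds.summary
    # capacity/freeSpace are what the VMFS volume reports; uncommitted is the
    # space still promised to thin-provisioned disks but not yet written.
    print("%-24s capacity %6.0f GB  free %6.0f GB  uncommitted %6.0f GB"
          % (s.name, s.capacity / GB, s.freeSpace / GB, (s.uncommitted or 0) / GB))
    # A VMFS datastore may be backed by more than one extent (LUN partition).
    if isinstance(ds.info, vim.host.VmfsDatastoreInfo):
        for extent in ds.info.vmfs.extent:
            print("    extent: %s (partition %d)" % (extent.diskName, extent.partition))

Disconnect(si)
```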

On the issue of your backup times, there are a few factors that could be at play:

- Are you seeing CPU or memory stress on the server? Not likely, but it's possible you may have undersized your VM.

- How many connections to the fabric do you have? Are you multipathing? If so, what policy are you using? Did you enable ALUA? (See the sketch after this list for one way to check path counts and policy per LUN.)

- How many other VMs are in your cluster? Are they all hammering on this same disk too? Are they sharing the connections to the SAN? This would obviously not happen on a physical server.

- FCP, iSCSI, or NFS?
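
On the multipathing point above, here is a minimal pyVmomi sketch (again illustrative, with placeholder host name and credentials) that lists each LUN a host sees, how many paths it has, and which path selection policy is in use:

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder connection details -- substitute your own vCenter and credentials.
si = SmartConnect(host="vcenter.example.local", user="administrator",
                  pwd="changeme",
                  sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)

for host in view.view:
    print(host.name)
    storage = host.config.storageDevice
    # Map LUN keys to their canonical names (naa./t10. identifiers) for readability.
    lun_names = {lun.key: lun.canonicalName for lun in storage.scsiLun}
    for lun in storage.multipathInfo.lun:
        policy = lun.policy.policy if lun.policy else "unknown"
        print("  %-40s paths=%d  policy=%s"
              % (lun_names.get(lun.lun, lun.lun), len(lun.path), policy))

Disconnect(si)
```

The output makes it easy to spot LUNs that only have a single active path or are still on the default path selection policy.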

JustyC
Enthusiast

We have TP configured on our NetApp SAN and should be able to take better advantage of it since we recently upgraded to vSphere. I seem to recall a 2 TB limit on datastores, but I'm not sure whether that is the total for the datastore or per extent. I will read through the article you offered.

We have not seen any memory or CPU issues on the VM guest, but we do see high disk latency on the volume the DB2 backup uses (monitored on the host in question). We migrated the VM to a different host and also tested sending the backup to another volume formatted with 8 MB blocks, but found no improvement in backup time.

We are not using multipathing for the iSCSI connection to the NetApp. We briefly reviewed this with VMware Support; they felt the two active connections within the vSwitch used solely for iSCSI were OK and that multipathing would not increase the bandwidth. They said the high disk latency was most likely a disk issue within the NetApp box. So..... the jury is still out. The backup volume is not used for anything other than backup (it's 1 volume, 1 LUN).
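
Since the latency spike is the clearest symptom, here is one more illustrative pyVmomi sketch (placeholder host names and credentials) that pulls the real-time per-device read/write latency counters from the host running the DB2 VM, i.e. the same numbers the vCenter performance charts show:

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder connection details -- substitute your own vCenter and credentials.
si = SmartConnect(host="vcenter.example.local", user="administrator",
                  pwd="changeme",
                  sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
perf = content.perfManager

# Look up the counter IDs for per-device read/write latency (reported in ms).
by_name = {"%s.%s.%s" % (c.groupInfo.key, c.nameInfo.key, c.rollupType): c.key
           for c in perf.perfCounter}
wanted = [by_name["disk.totalReadLatency.average"],
          by_name["disk.totalWriteLatency.average"]]

# The ESX host name below is a placeholder for the host running the DB2 VM.
host = content.searchIndex.FindByDnsName(dnsName="esx01.example.local", vmSearch=False)

spec = vim.PerformanceManager.QuerySpec(
    entity=host,
    metricId=[vim.PerformanceManager.MetricId(counterId=cid, instance="*")
              for cid in wanted],
    intervalId=20,   # real-time (20-second) sampling interval
    maxSample=15)    # roughly the last five minutes of samples

for entity_metrics in perf.QueryPerf(querySpec=[spec]):
    for series in entity_metrics.value:
        print(series.id.instance, "peak", max(series.value), "ms")

Disconnect(si)
```

Sustained values in the hundreds of milliseconds on the backup LUN, with the other devices quiet, would support the theory that the bottleneck is inside the NetApp rather than in the vSwitch or multipathing configuration.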
