VMware Cloud Community
JumpR
Contributor

Best Practice: Multiple partitions on a single vmdk or one partition per vmdk

Hello everyone-

I would like to get your opinions on the best practice for a file server vmdk setup.

The C: partition would be allocated for the OS, while the E:, F:, ... partitions would be used for data storage.

Setup 1:

vmdk1 = thick provisioned disk hosting C drive partition

vmdk2 = thick provisioned disk hosting E, F..... partitions

Setup 2:

vmdk1 = thick provisioned disk hosting C partition

vmdk2 = thick provisioned disk hosting E partition

vmdk3 = thick provisioned disk hosting F partition

.......

Also, the multiple data partitions would be configured as Independent + Persistent virtual disks because of snapshots. My logic is that the OS disk (C:) is what gets snapshotted, e.g. when testing new software, while the data partitions act as storage disks that need to keep the most current files regardless of reverting to an older snapshot. BTW, the data partitions are for regular Word, Excel, pictures, and so on.
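For the disk-mode change itself, something like this pyvmomi sketch is what I have in mind (just a sketch, untested; the vCenter host, credentials, the VM name "FILESRV01" and the disk label are made up):

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim
import ssl

# Lab only: skip certificate validation; use proper certs in production.
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.local", user="administrator@vsphere.local",
                  pwd="secret", sslContext=ctx)
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "FILESRV01")  # hypothetical VM name

# Flip the data disk to independent-persistent so snapshots only cover the OS disk.
spec = vim.vm.ConfigSpec()
for dev in vm.config.hardware.device:
    if isinstance(dev, vim.vm.device.VirtualDisk) and dev.deviceInfo.label == "Hard disk 2":
        dev.backing.diskMode = "independent_persistent"
        spec.deviceChange = [vim.vm.device.VirtualDeviceSpec(operation="edit", device=dev)]
vm.ReconfigVM_Task(spec=spec)
Disconnect(si)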

I also realize that I could have a single data partition, e.g. E:, with multiple shared folders, but since each folder is for a different department it might cause more trouble when growing the space in the future. A large vmdk might take more time to expand. Again, not sure.

thank you

10 Replies
mcowger
Immortal

If the filesystems will be used or treated differently, make them different VMDKs. It improves flexibility for things like moving data between tiers, backups, and, as you mention, snapshots.

--Matt VCDX #52 blog.cowger.us
JumpR
Contributor

So let's say 10-15 vmdks per file server is not bad practice?

How does the performance compare to a single vmdk hosting all the partitions?

kastlr
Expert

Hi,

In general, virtualisation doesn't change much about disk I/O.

You could use the same rules you would use to size a physical server.

Multiple vmdks mean multiple targets for your I/O load.

If you need to reach high I/O load/throughput or low response times, the best solution is to create multiple vmdks and distribute them over multiple datastores.

HtH


Hope this helps a bit.
Greetings from Germany. (CEST)
mcowger
Immortal

Multiple vmdks mean multiple targets for your I/O load.

This is an important point.  More LUNs = more devices = more queues = better response time (usually).

If you need to reach high I/O load/throughput or low response times, the best solution is to create multiple vmdks and distribute them over multiple datastores.

Or you could get an array that supports decent automated tiering (there are many) and let the array figure that out better than you ever could. :-) HP, EMC, Hitachi, and Dell all have arrays that can do this with varying capabilities.

HtH

--Matt VCDX #52 blog.cowger.us
JumpR
Contributor

This is an important point.  More LUNs = more devices = more queues = better response time (usually).

So does it mean that each LUN (aka SAN datastore) is allocated a thread, rather than each individual vmdk inside the LUN? That way, having LUNs with a small number of vmdks would improve performance.

Is this I/O thread management done on the SAN side or the ESX side? We have an EqualLogic PS5000 and ESX Server 4.0.0. If it's on the ESX side, does going to ESX 5 improve the thread management?

mcowger
Immortal

JumpR wrote:

This is an important point.  More LUNs = more devices = more queues = better response time (usually).

So does it mean that each LUN (aka SAN datastore) is allocated a thread, rather than each individual vmdk inside the LUN? That way, having LUNs with a small number of vmdks would improve performance.


It's not a thread, it's a queue. There are many queues, including queues within the guest, within the VM, per VMDK, per LUN, per target, per HBA, etc.


Is this I/O thread management done on the SAN side or the ESX side? We have an EqualLogic PS5000 and ESX Server 4.0.0. If it's on the ESX side, does going to ESX 5 improve the thread management?

It's both, because there are queues in both the array and the host (and the VMs).

In general, more LUNs give (slightly) better performance. This quickly hits diminishing returns, however: going from 1 LUN to 5 is very good; going from 25 to 30 probably gains nothing.
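To put rough numbers on that (a sketch with assumed values: a per-LUN queue depth of 32 and a workload keeping 128 I/Os outstanding), total queue slots grow with the LUN count, but once there are more slots than outstanding I/Os, extra LUNs buy you nothing:

PER_LUN_QDEPTH = 32      # assumed device queue depth
OUTSTANDING_IOS = 128    # assumed workload

for luns in (1, 5, 25, 30):
    slots = luns * PER_LUN_QDEPTH            # total device queue slots
    in_flight = min(OUTSTANDING_IOS, slots)  # I/Os the host can actually issue
    waiting = OUTSTANDING_IOS - in_flight    # I/Os stuck waiting for a free slot
    print(f"{luns:2d} LUNs: {slots:4d} slots, {in_flight:3d} in flight, {waiting:3d} waiting")

# 1 LUN  ->  32 slots: 96 I/Os wait in host-side queues.
# 5 LUNs -> 160 slots: everything is in flight; 25 vs. 30 LUNs changes nothing.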

--Matt VCDX #52 blog.cowger.us
JumpR
Contributor

Great, thank you.

I will keep 500 GB - 1 TB LUNs on our 4 TB SAN. The file server data vmdk might get its own LUN, but the OS vmdk will be bundled into another LUN. I'll try to keep it to around 5-10 servers per LUN.

kastlr
Expert

Hi,

To transfer a disk I/O from the ESX server to the array, the HBA uses a command queue.

Even when an I/O (read or write) is immediately answered/acknowledged by the array, it still occupies a slot in the HBA command queue.

With default settings, the queue depth per LUN is set to 30 (Emulex) or 32 (QLogic).

So if you need to push a higher number of parallel I/Os to your array, you should have enough physical LUNs to handle the load.

Here's a good document about ESX and queues, but keep in mind that the virtualised OS also uses queuing, which isn't covered in the document.

Storage Queues and Performance
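To put numbers on it (a back-of-the-envelope sketch; the 200 outstanding I/Os are just an assumed peak load, the queue depths are the defaults from above):

import math

def luns_needed(parallel_ios, per_lun_qdepth):
    # Minimum LUN count so that every outstanding I/O gets a queue slot.
    return math.ceil(parallel_ios / per_lun_qdepth)

for hba, qdepth in (("Emulex", 30), ("QLogic", 32)):
    print(f"{hba} (queue depth {qdepth}): {luns_needed(200, qdepth)} LUNs needed")
# Emulex (queue depth 30): 7 LUNs needed
# QLogic (queue depth 32): 7 LUNs needed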

JumpR wrote:

So does it mean that each LUN (aka SAN datastore) is allocated a thread, rather than each individual vmdk inside the LUN?
That way, having LUNs with a small number of vmdks would improve performance.

No. A datastore with fewer VMs isn't faster than a datastore with more VMs on it when both are created with the same specs.

If a SAN disk could serve, say, 150 IO/s, it wouldn't serve 20 IO/s any faster than 100 IO/s; the I/O response time will stay nearly steady at around 6 ms (when not served from array cache).

If your VMs generate more I/Os than the LUN can deliver, you will face a performance impact, but the LUN doesn't care whether the load is created by few or many VMs.
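A quick sanity check of those numbers (simple arithmetic based on the assumed 150 IO/s disk from above):

# Per-I/O service time of a disk that can sustain 150 IO/s: below saturation
# the response time barely moves; past saturation a backlog builds up.
DISK_CAPACITY_IOPS = 150
SERVICE_TIME_MS = 1000 / DISK_CAPACITY_IOPS  # ~6.7 ms per I/O

for offered_iops in (20, 100, 150, 200):
    util = offered_iops / DISK_CAPACITY_IOPS
    state = "backlog grows" if util > 1 else f"{util:.0%} busy, ~{SERVICE_TIME_MS:.1f} ms/IO"
    print(f"{offered_iops:3d} IO/s offered: {state}")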

Performance design is an art; that's why a lot of storage companies implement features which enable their arrays to automatically move hot LUNs (or, even better, tracks) between storage tiers.

I totally agree with mcowger that these features are much better than a manual design, simply because they are dynamic while a manual design is static.

Hth

Ralf


Hope this helps a bit.
Greetings from Germany. (CEST)
aren
Contributor

Hi all,

Is this still valid with vSphere 6.7 U2 and vSAN?

We are having the same discussion...

Is there a golden rule?

regards

Aren

continuum
Immortal

> Is there a golden rule?

Yes - creating a vmdk with more than one partition is OK when it's the boot vmdk.

Then you have, for example, an EFI partition, a recovery partition, and a Windows partition. Or, if you have a Linux disk, a partition for / and one for swap.

For all other cases the best option is to create one vmdk for each partition.

A bad idea is to create a single Windows vmdk and then put both a C: partition and a D: partition on it.
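If you want to script the "one vmdk per volume" approach, a rough pyvmomi sketch could look like this (the helper name, the VM object, and the size are assumptions, not tested code):

from pyVmomi import vim

def add_data_disk(vm, size_gb):
    # Attach a new thick-provisioned vmdk to the VM's first SCSI controller.
    controller = next(d for d in vm.config.hardware.device
                      if isinstance(d, vim.vm.device.VirtualSCSIController))
    used = [d.unitNumber for d in vm.config.hardware.device
            if getattr(d, "controllerKey", None) == controller.key]
    unit = next(u for u in range(16) if u != 7 and u not in used)  # 7 = controller itself

    disk = vim.vm.device.VirtualDisk()
    disk.backing = vim.vm.device.VirtualDisk.FlatVer2BackingInfo()
    disk.backing.diskMode = "persistent"
    disk.backing.thinProvisioned = False      # thick provisioned
    disk.capacityInKB = size_gb * 1024 * 1024
    disk.controllerKey = controller.key
    disk.unitNumber = unit

    change = vim.vm.device.VirtualDeviceSpec()
    change.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
    change.fileOperation = vim.vm.device.VirtualDeviceSpec.FileOperation.create
    change.device = disk
    return vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[change]))

# Usage, once you have a vm object via pyVim.connect: add_data_disk(vm, 200)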


________________________________________________
Do you need support with a VMFS recovery problem? Send a message via Skype: "sanbarrow"
I do not support Workstation 16 at this time ...
