VMware Cloud Community
ischukovde
Contributor

Best way to split the disk space for virtual machines on storage

Good day!

There is a 12 GB disk array (NetApp) and 4 ESXi servers (currently vSphere 5.5, with an upgrade to 6.5 planned).

What is the best way to split the disk space for virtual machines: 4 x 3 GB or 2 x 6 GB?

7 Replies
ischukovde
Contributor

Typo: not 12 GB, but 12 TB

scott28tt
VMware Employee

Moderator: Thread moved to the vSphere area.


-------------------------------------------------------------------------------------------------------------------------------------------------------------

Although I am a VMware employee I contribute to VMware Communities voluntarily (i.e. not in any official capacity)
VMware Training & Certification blog
continuum
Immortal

"Right size your VMDKs"

For a Windows server, use something like 100 GB - 200 GB for the boot disk holding the recovery partition and the C: partition.

Add larger disks for D: and E: and so on - where your data lives.

Don't make the boot disk too large! And do not use it for more than the small 500 MB boot partition and C:.
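
A minimal sketch of that layout idea in Python, assuming a 150 GB boot disk and a 25% growth margin on the data disks; both figures and the plan_vm_disks helper are illustrative, not from this post:

# Sketch: small boot VMDK plus separate data VMDKs sized to actual need.
# The 150 GB boot size and 25% growth margin are assumptions for illustration.
def plan_vm_disks(data_needs_gb, boot_gb=150, growth=0.25):
    layout = [("boot disk (500 MB boot partition + C:)", boot_gb)]
    for i, need_gb in enumerate(data_needs_gb, start=1):
        layout.append((f"data disk {i} (D:, E:, ...)", int(need_gb * (1 + growth))))
    return layout

# Example: a VM that needs roughly 400 GB and 250 GB of application data.
for name, size_gb in plan_vm_disks([400, 250]):
    print(f"{name}: {size_gb} GB")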


________________________________________________
Do you need support with a VMFS recovery problem? Send a message via Skype: "sanbarrow"
I do not support Workstation 16 at this time ...

NicolasAlauzet

You meant TB, right?

Based on your setup, I would go with 2x6TB.

Also, take a look at this one: https://docs.vmware.com/en/VMware-vSphere/6.5/vsphere-esxi-vcenter-server-65-storage-guide.pdf

-------------------------------------------------------------------
Triple VCIX (CMA-NV-DCV) | vExpert | MCSE | CCNA
sjesse
Leadership

Assuming this is a FAS or another system running ONTAP, I would actually ask NetApp. In general they suggest one aggregate per node in an HA pair, but in some cases they recommend two. If it's a lower-end model, you should in general have only one aggregate. If you're asking how to split up that aggregate, then take some of the other suggestions; in general I try to keep the volumes small enough to hold no more than the number of VMs we want per datastore.
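
A rough way to compare the two proposed splits against that rule of thumb; the average VM size and free-space headroom below are hypothetical placeholders, not figures from the thread:

# Rough capacity comparison of 4 x 3 TB vs 2 x 6 TB datastores.
# Average VM size and headroom are assumptions for illustration only.
AVG_VM_GB = 60      # hypothetical average provisioned size per VM
HEADROOM = 0.20     # keep roughly 20% of each datastore free

for count, size_tb in [(4, 3), (2, 6)]:
    usable_gb = size_tb * 1024 * (1 - HEADROOM)
    vms_per_ds = int(usable_gb // AVG_VM_GB)
    print(f"{count} x {size_tb} TB: ~{vms_per_ds} VMs per datastore, "
          f"~{count * vms_per_ds} VMs total")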

IRIX201110141
Champion

There is also the NetApp E-Series...

I think he means LUN -> datastore sizing rather than VM vDisk sizing.

My thoughts

- I don't like one single large LUN/VMFS, because putting all your eggs in one basket is never a good solution.

- Normally one controller module owns a LUN and performs its workload. With dual controllers and a single LUN, the second one idles all the time.

- Snapshots are most of the time LUN-based, and if you need different policies, 2 or more LUNs are smarter. The same goes for compression/dedup.

But... a LUN/datastore needs to be at least as big as the need of your largest VM vDisk.
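
A quick sanity check for that last point, assuming a hypothetical 2 TB (2048 GB) largest vDisk and 25% headroom for swap files and snapshot growth; both values are assumptions, not from the thread:

# Check whether a proposed datastore size still fits the largest planned vDisk,
# leaving headroom for VM swap files and snapshot growth (25% is an assumption).
def fits_largest_vdisk(datastore_tb, largest_vdisk_gb, headroom=0.25):
    usable_gb = datastore_tb * 1024 * (1 - headroom)
    return largest_vdisk_gb <= usable_gb

for size_tb in (3, 6):
    print(f"{size_tb} TB datastore, 2048 GB vDisk fits:",
          fits_largest_vdisk(size_tb, 2048))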

Regards,
Joerg

sjesse
Leadership

This is where clarity about what is being asked is important. There are two different things you need to think of, since there is logical space and physical space:

1.) Both the E-Series and FAS/ONTAP systems give you options for grouping the disks. ONTAP calls them aggregates and SANtricity calls them volume groups; this is usually a form of RAID.

2.) ONTAP and E-Series let you create logical separations as well. In ONTAP you can create volumes and LUNs, where multiple LUNs can be in a volume, but volumes can also be used for NFS or CIFS. In E-Series it's just one LUN per volume (see the sketch below).
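
To make that layering concrete, a hedged sketch of the hierarchy described above; all names and the mapping are hypothetical, not taken from the thread:

# ONTAP-style layering: physical disks -> aggregate -> volume -> LUN -> VMFS datastore.
# All identifiers below are placeholder names.
ontap_layout = {
    "aggr1 (RAID group of physical disks)": {
        "vol_vmware1": ["lun_datastore1"],   # a volume can hold one or more LUNs
        "vol_vmware2": ["lun_datastore2"],   # or be exported directly via NFS/CIFS
    },
}

# E-Series/SANtricity equivalent: a volume group (RAID) containing volumes,
# where each volume is presented to the hosts as exactly one LUN.
eseries_layout = {
    "volume_group1 (RAID)": ["volume1 -> LUN 0", "volume2 -> LUN 1"],
}

print(ontap_layout)
print(eseries_layout)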

There can be situations where you want multiple smaller aggregates, I think when there are stronger controller heads, so the CPU can work on more volumes at a time. NetApp has what's called volume affinities, which basically determine how many volumes a controller can work on at the same time. This is why, at least with NetApp, it's good to check with them as well, as there are performance considerations that should be made. Again, in general I think you can make an entire disk shelf one big aggregate so all the disks can work together, and then create smaller LUNs to split that bigger aggregate up logically.

For E-Series I'm not aware of their recommendation, but I do the same and everything works pretty well.
