VMware Cloud Community
daveclaussen
Enthusiast

iSCSI SAN, ESX4 Datastore Sizing Best Practices

Hi all. I am in the final stages of our vSphere build and I am looking for best practices for sizing datastores/LUNs on an iSCSI SAN for multiple ESX4 hosts.

What we have:

Two HP c7000 Blade Enclosures with three BL460c blades in each enclosure.

All servers will run ESX4 and use an iSCSI (software initiator) SAN.

I am not sure of the best way to configure these. One large LUN holding one large datastore? A few smaller LUNs, each holding its own datastore? A LUN/datastore per ESX host? I do want to use vMotion, which might help narrow down the answer.

Any help all of you seasoned VMware folks can give this noob is much appreciated.

- Dave

6 Replies
amvmware
Expert

VMware best practice recommends a single datastore per LUN.

vMotion has no impact on your datastore design, though Storage vMotion might.

How many VMs do you plan on deploying?

Are they all a standard size?

What is your largest VM?

Have you performed any storage analysis to understand your storage requirements?

daveclaussen
Enthusiast

Answers:

VMware best practice recommends a single datastore per LUN.

vMotion has no impact on your datastore design, though Storage vMotion might.

How many VMs do you plan on deploying?

Starting with 5 per host - hoping for more.

Are they all a standard size?

Yes

What is your largest VM?

~75GB

Have you performed any storage analysis to understand your storage requirements?

No

So would you think a single 500GB LUN, shared by all six of these ESX hosts, would be a good idea? The size is expandable on the SAN side.

amvmware
Expert

Dave

This is my rule of thumb.

* Mix virtual machines on the datastores to maximise storage and I/O efficiency - place low-I/O virtual machines alongside high-I/O virtual machines on the same datastore.


* A maximum of 12 - 16 virtual machines per VMFS datastore.


* A maximum of 16 VMDK files per VMFS datastore.


* One VMFS volume per storage LUN.


* At least 15% to 20% of a VMFS datastore should be left as free space to accommodate requirements such as virtual machine swap files and snapshots.




Based on your requirements you will initially have 30 VMs, so you should look to configure two datastores - but the number of VMDK files or the I/O requirements of a VM may change that.
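The rules of thumb above reduce to simple arithmetic. A minimal sketch, using the numbers from this thread (30 VMs, a 16-VM cap per datastore, 20% free space) - the function names are illustrative, not from any VMware tool:

```python
import math

# Assumed inputs, taken from the rules of thumb in this thread;
# adjust them for your own environment.
MAX_VMS_PER_DATASTORE = 16   # upper bound suggested above (12-16)
FREE_SPACE_FRACTION = 0.20   # keep 15-20% free for swap files and snapshots

def datastores_needed(total_vms: int,
                      max_per_datastore: int = MAX_VMS_PER_DATASTORE) -> int:
    """Minimum number of datastores to stay within the per-datastore VM cap."""
    return math.ceil(total_vms / max_per_datastore)

def usable_capacity_gb(datastore_size_gb: float,
                       free_fraction: float = FREE_SPACE_FRACTION) -> float:
    """Capacity left for VM files after reserving the recommended free space."""
    return datastore_size_gb * (1.0 - free_fraction)

# 6 hosts x 5 VMs each = 30 VMs, as in the thread
print(datastores_needed(30))    # -> 2 datastores
print(usable_capacity_gb(250))  # -> 200.0 GB usable on a 250GB datastore
```

Note that the 200GB of usable space per 250GB datastore is what has to hold the VMDKs, swap files, and snapshot deltas for that datastore's VMs.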



daveclaussen
Enthusiast

Understood.

I will start with two, 250GB datastores - each on its own LUN.

And you say that even though these VMs will reside on separate datastores, vMotion will still work across all six ESX hosts?

amvmware
Expert

Dave

Correct - just as long as you meet the requirements for vMotion (shared storage, Gigabit NICs, a consistent naming convention for vSwitches), it is fine. With vMotion the VM files do not move anywhere; it is the ESX host in the cluster providing resources to the VM that changes. Only with Storage vMotion do the files move.
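Those prerequisites can be sanity-checked programmatically. A minimal sketch using a hypothetical per-host inventory (the host and network names are invented for illustration; in practice you would pull this data from vCenter): vMotion needs every host to see the same shared storage and to use identical port-group names.

```python
# Hypothetical inventory: which datastores and port groups each host sees.
hosts = {
    "esx01": {"datastores": {"DS1", "DS2"}, "portgroups": {"VM Network", "vMotion"}},
    "esx02": {"datastores": {"DS1", "DS2"}, "portgroups": {"VM Network", "vMotion"}},
    "esx03": {"datastores": {"DS1"},        "portgroups": {"VM Network", "vMotion"}},
}

def vmotion_ready(inventory: dict) -> bool:
    """True if all hosts share at least one datastore and have identical port-group names."""
    datastores = [h["datastores"] for h in inventory.values()]
    portgroups = [h["portgroups"] for h in inventory.values()]
    shared = set.intersection(*datastores)          # storage visible to every host
    consistent = all(pg == portgroups[0] for pg in portgroups)
    return bool(shared) and consistent

print(vmotion_ready(hosts))  # -> True (DS1 is shared and port groups match)
```

Even esx03, which sees only DS1, passes here: as long as the VM being moved lives on a datastore both hosts share, the files stay put and only the running host changes.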

daveclaussen
Enthusiast

Thanks for your help. Now I have a plan!
