VMware Cloud Community
arik01
Contributor

ESXi install partition

Hello, we have a Dell R720XD server and I am planning to install ESXi 5.1 on it.

I have one large RAID 10 array for storing the VMDK data files.

I am not sure if I can install the ESXi boot partition on the same physical disks.

I know that the ESXi partition doesn't put much I/O strain on the disks.

But now I read this in the VMware documentation:

http://pubs.vmware.com/vsphere-51/index.jsp?topic=%2Fcom.vmware.vsphere.install.doc%2FGUID-FF4F7C0F-...

Disk location

Place all data that your virtual machines use on physical disks allocated specifically to virtual machines. Performance is better when you do not place your virtual machines on the disk containing the ESXi boot image. Use physical disks that are large enough to hold disk images that all the virtual machines use.

According to this, should I purchase another two disks in RAID 1 dedicated to the ESXi boot partition?

Or can I safely install ESXi on the existing RAID 10, knowing that it won't impact performance much?

Thanks.

9 Replies
memaad
Virtuoso

Hi,

The best option is to install the ESXi host on a 4 GB USB stick or a 10 GB SD card, which comes along with most VMware-compatible physical servers.

Why do you want to use physical disks for installing ESXi?

Regards

Mohammed

Mohammed | Mark it as helpful or correct if my suggestion is useful.
arik01
Contributor

For redundancy, of course.

If my host already has the physical storage that I need for my datastore, why not use it for the install partition as well?

arik01
Contributor

Any suggestions?

zXi_Gamer
Virtuoso

I have one large RAID 10 array for storing the VMDK data files

Since you have a large RAID array, you can carve it into two RAID volumes: one of, say, 10 GB and another of, say, 100 GB.

During the install you can select the 10 GB volume for the ESXi image and keep the VMs on the other volume.
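If you go that route, you can confirm the layout afterwards from the ESXi shell; a quick sketch (the device name below is only illustrative, yours will differ):

# List mounted filesystems; the small volume carries the ESXi boot
# banks, the large one shows up as a VMFS datastore for the VMs.
esxcli storage filesystem list

# Inspect the partition table of the boot device (illustrative name).
partedUtil getptbl /vmfs/devices/disks/naa.6d4ae520a8c2e1001ae8f3c0457851e9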

arik01
Contributor

That is not the issue here. I know that it is possible to install the host on the same physical LUN as the storage, but the VMware document says that I should use dedicated storage for the ESXi host to get better performance.

In contradiction to that, VMware lets you install the hypervisor on a slow SD card...

The question that no one seems to answer is: how much I/O (if any) puts strain on the ESXi install disk?

zXi_Gamer
Virtuoso

but the VMware document says that I should use dedicated storage for the ESXi host to get better performance

This is a best-practice recommendation for better performance.

In contradiction to that, VMware lets you install the hypervisor on a slow SD card...

There is no contradiction here. If you boot from a USB or SD card, the hypervisor's access to disk no longer competes with the VMs' access to the disks.

how much I/O (if any) puts strain on the ESXi install disk?

Well, if that is your actual question, there is no exact figure to be given; it is an administrator's judgment call.

However, considering your question: if you take a look at the / partitions of the hypervisor, they are mainly used for booting, holding ISO files for VMware Tools installations, storing the logs [which can grow with a larger infrastructure] and, most importantly, storing the configuration files and the runtime process nodes [vsish].
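You can see this for yourself from the ESXi shell; a quick sketch (the exact output differs per host):

# Show usage of the in-memory (visorfs) ramdisks ESXi runs from.
vdf -h

# Where does /scratch point? On a disk install it usually resolves
# to a .locker directory on the local VMFS datastore.
ls -l /scratch

# Where are the logs going: local scratch or a remote syslog target?
esxcli system syslog config get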

The strain you put on it is mostly an indirect result of swapping. For example, if you have heavily loaded VMs in the datastores issuing lots of reads and writes, then access to the configuration files and logging might be impacted or slower. But if you avoid swapping [writes to disk], then the VMs and ESXi can live happily together on the local disks.
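If you want to see whether swapping is actually happening, esxtop's memory view carries the per-VM swap counters; a sketch, with counter names as I know them from 5.x:

# Run esxtop and press 'm' for the memory view. Watch:
#   SWCUR        = memory currently swapped per VM
#   SWR/s, SWW/s = swap read/write rates; non-zero means the host
#                  is actively swapping VM memory to disk.
esxtop

# Or grab one batch sample for offline review:
esxtop -b -n 1 > /tmp/esxtop-sample.csv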

Also, when a RAID failover or a RAID rebuild is in progress, both the hypervisor's and the VMs' reads/writes to the disks will slow down.

Hope it helps,

zXi

zXi_Gamer
Virtuoso

In contradiction to that, VMware lets you install the hypervisor on a slow SD card...

Again, the reason for this is that once booted from a USB/SD card, ESXi gets loaded into memory, and in-memory access is much faster than disk reads/writes. So the fact that the SD card is slower doesn't matter once you have booted up.

Boot an ESXi image from a USB disk; after booting, try removing the USB disk and your server will still function. At certain intervals, the backup.sh script is called to write configuration changes back to the boot device, and that is when you might see errors.
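You can trigger that same backup by hand to see what it does; a sketch based on how my 5.x hosts are set up (paths assumed, check your own host):

# Bundle the current configuration (state.tgz) and write it to the
# boot bank; ESXi's root crontab calls this script periodically.
/sbin/auto-backup.sh

# See when it is scheduled on this host:
cat /var/spool/cron/crontabs/root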

arik01
Contributor

OK, if I remove the USB stick and the OS keeps operating without any problems, that can only mean one thing:

there is no I/O or swapping activity on the system disk. From that I understand that there is no difference if I put the OS on the same disk as the data, since the ESXi OS loads into memory and doesn't strain the disks.

Did I get that right?

arik01
Contributor

Is there any documentation on how much disk I/O the scratch/swap disk can see?

I need to decide whether to install ESXi on the same big RAID 10 array together with the data, or to build an additional two-disk RAID 1 array dedicated to the ESXi OS and scratch, for a performance gain.

This topic is very confusing; although VMware recommends installing the OS on dedicated disks, I can't figure out the reason or find any documentation about it.

I know that the ESXi OS gets loaded completely into RAM; it's just the scratch space I'm not sure about...
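If scratch is the open question, you can check where it lives and point it at whichever volume you decide on; a rough sketch following VMware KB 1033696 (the datastore name and .locker directory are illustrative, and a reboot is needed before the change applies):

# Where is scratch right now?
vim-cmd hostsvc/advopt/view ScratchConfig.CurrentScratchLocation

# Create a directory on the volume you want scratch on (name is
# illustrative), point the host at it, then reboot.
mkdir -p /vmfs/volumes/datastore1/.locker-esxi01
vim-cmd hostsvc/advopt/update ScratchConfig.ConfiguredScratchLocation string /vmfs/volumes/datastore1/.locker-esxi01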
