Hi All,
I'm new to VMware and I currently have 500GB of space which I can export via NFS or iSCSI. I would like all VMs to run off this storage. I will also have a VM running on ESX3 that requires about 100GB of storage.
Which deployment method would be optimal (performance, manageability, etc.)?
1. NFS export to the ESX host, create the VM with a 100GB VMDK file
2. NFS export to the ESX host, create the VM with 8GB VMDK file (for OS) and then within the Guest OS, perform a NFS mount for the 100GB
3. iSCSI present to the ESX host, create the VM with a 100GB VMDK file
4. iSCSI present to the ESX host, create the VM with 8GB VMDK file (for OS) and then within the Guest OS, mount via iSCSI for the 100GB
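For reference, options 2 and 4 would look roughly like this inside a Linux guest. This is only a sketch: the server name `filer`, the target IQN, and the mount points are placeholders, and the iSCSI commands assume the open-iscsi initiator, which may differ from what your guest OS ships with.

```shell
# Option 2: mount the 100GB data area over NFS from inside the guest
# ("filer" and the export path are placeholders for your storage box).
mount -t nfs filer:/export/data /mnt/data

# Option 4: attach the 100GB LUN over iSCSI from inside the guest
# (assumes the open-iscsi initiator; the IQN is a placeholder).
iscsiadm -m discovery -t sendtargets -p filer:3260
iscsiadm -m node -T iqn.2000-01.com.example:data-lun -p filer:3260 --login

# The LUN then appears as a local disk (e.g. /dev/sdb); format and
# mount it (partitioning omitted for brevity).
mkfs.ext3 /dev/sdb
mount /dev/sdb /mnt/data
```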
Does anyone have any insight or experience to share? Any pros and cons to consider?
Many thanks in advance!
Create two iSCSI LUNs (250GB each).
Put the OS of the file server on one LUN and the data disk on the other. Use VMDKs on VMFS via iSCSI for everything. Serve and enjoy. Use a hardware iSCSI initiator if possible for better performance.
iSCSI via the ESX host has lower overhead than iSCSI from the OS inside a VM, because the guest path incurs a networking double-copy.
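For options 1 and 3, once the datastore is visible to the host, the 100GB virtual disk is just a VMDK created on it, e.g. from the ESX 3 service console (the datastore and VM names below are placeholders):

```shell
# Create a 100GB virtual disk on the iSCSI-backed VMFS datastore
# ("iscsi-lun1" and "fileserver" are placeholder names).
vmkfstools -c 100G /vmfs/volumes/iscsi-lun1/fileserver/fileserver_data.vmdk
```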
Is this for production use? If so, go for iSCSI.
Thanks for the suggestion. But which would be "better"?
---
3. iSCSI present to the ESX host, create the VM with a 100GB VMDK file
4. iSCSI present to the ESX host, create the VM with 8GB VMDK file (for OS) and then within the Guest OS, mount via iSCSI for the 100GB
---
I have created 50GB VMs using iSCSI. I have not tried mounting further disks via the guest, but I would expect the performance to be the same.
Hello,
There are a number of similar posts on here you can reference for this. My personal preference is to stick with VMDKs on a VMFS filesystem unless you need to leverage some snapshot/backup functionality on your iSCSI storage solution. You will maintain the portability of your disk drives and give up nothing.
Not sure what your plans are, but for NFS you could get by with a VMware VI Starter license. It was never something we considered, but it might be of interest to you. Starter supports up to 4 CPU sockets and 8GB of RAM, and NFS is the only remote storage technology enabled.
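For what it's worth, attaching an NFS export as a datastore on ESX 3 is a one-liner from the service console (the host name, export path, and datastore label below are placeholders):

```shell
# Add the NFS export as a datastore named "nfs-vmstore"
# ("nashost" and "/export/vmstore" are placeholders).
esxcfg-nas -a -o nashost -s /export/vmstore nfs-vmstore

# List configured NFS datastores to confirm.
esxcfg-nas -l
```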
Hello,
Another option is to create an iSCSI VMFS volume, plus another iSCSI LUN to use as an RDM. Create the 8GB VMDK on the VMFS volume and the 100GB disk as an RDM on the iSCSI LUN. Since everything may only be 1GbE, I am not sure it makes much difference from a performance point of view. But I prefer iSCSI over NFS because it uses the VMFS3 file system rather than NFS, which has its own issues.
You could try each option and see which performs better for you as well.
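As a rough sketch of the RDM route on ESX 3 (the device path below is a placeholder; substitute the actual vmhba path of your iSCSI LUN). `vmkfstools -r` creates a virtual-compatibility mapping, `-z` a physical one:

```shell
# Create a virtual-compatibility RDM mapping file on the VMFS datastore,
# pointing at the raw iSCSI LUN (the device path is a placeholder).
vmkfstools -r /vmfs/devices/disks/vmhba40:0:1:0 \
    /vmfs/volumes/iscsi-vmfs/fileserver/data_rdm.vmdk
```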
Best regards,
Edward
True. I am not a fan of RDMs, though, unless you have a reason to use them, such as the ability to use SAN-based replication, snapshot, backup, etc. tools. The performance difference is negligible, and you lose some of the flexibility you get with the encapsulated VMDK approach.
Hi VirtualNoitall,
Prior to my original post, I went through a few of the sub-forums, and yes, there are indeed a number of posts referencing NFS/iSCSI. But they are limited to the choice of connecting storage to ESX via NFS vs iSCSI, hence the purpose of my post: to find out how the size of the VM image will impact performance.
My thinking is that if we create a large VM image on NFS/iSCSI, the VM performs I/O on this "large" disk through ESX, whereas if the VM accesses NFS/iSCSI directly, the guest can do the necessary network encapsulation itself and send it out over the network, possibly reducing overhead on the ESX host. Does anyone have insight to share on this?
Yep, you are right that if we use NFS, we can use the Starter license instead of Standard. I think NFS is also easier to back up (i.e. rsync/copy the files from the NFS server/directory) versus having to use specialised tools to back up iSCSI LUNs and volumes.
Hello,
Some of the recent forum posts have also been about the performance of RDM vs VMDK, which is similar to your question. I would always lean towards encapsulated VMDKs. In our experience the performance differences are usually negligible, and I would rather not give up the flexibility and portability of the VMDKs.
The only time I would is if I wanted to leverage SAN/storage technologies such as SAN replication, snapshots, native backups, etc.
---
Create two iSCSI LUNs (250GB each).
Put the OS of the file server on one LUN and the data disk on the other. Use VMDKs on VMFS via iSCSI for everything. Serve and enjoy. Use a hardware iSCSI initiator if possible for better performance.
iSCSI via the ESX host has lower overhead than iSCSI from the OS inside a VM, because the guest path incurs a networking double-copy.
---
Hi VirtualNoitall,
From what I've read, RDM is different from directly accessing NFS/iSCSI from the VM, because RDM essentially gives the VM raw access to storage that the ESX host has access to,
i.e. VM -> RDM -> ESX -> Storage vs VM -> Network (via ESX) -> NFS/iSCSI.
Nevertheless thanks for the suggestion.
Hi Ellers, thanks for pointing out that there is additional overhead due to the networking double-copy!
Yes, the same principle applies, though. I much prefer encapsulated virtual machines over making external disks visible directly to the virtual machine.