We are planning to run VMware using iSCSI. The plan is to move many of our big file and print servers to it.
We need to know the best practice on storage: use vmdk files for all drives, or just for the OS (C:) drive and then use the MS iSCSI initiator from within the OS for the data drives? Some of the data drives could be as large as 300-500 GB. Part 2 - do we still need to routinely defrag them from within the OS?
> We are planning to run VMware using iSCSI. The plan is to move many of our big file and print servers to it.
> We need to know the best practice on storage - use vmdk files for all drives or just for the OS (C:) drive and then use the MS iSCSI initiator from within the OS for the data drives.
It works, but that way you won't be able to do VMotion (when your C: drives reside on local disk).
I would prefer an iSCSI SAN for all drives, and best of all is to use iSCSI HBAs - when your storage supports it.
> Part 2 - do we still need to routinely defrag them from within the OS?
YES.
If your question is performance related, I would say you don't gain anything by running the Windows iSCSI initiator..... perhaps it could even be worse than letting ESX use its own initiator and putting the vmdk on it.
Massimo.
Well all the vmdk files would be on the iSCSI SAN so why wouldn't I have vmotion capabilities?
I know it would make a neater package per VM to have them all together - OS drive and data drive - but I was wondering basically if anyone does it the other way mentioned: just the OS via vmdk and all other data on the VM via the MS initiator to iSCSI....
We plan to try VCB as well (I know it's not supported) so I thought that doing all vmdk would make this easier to backup.
We do plan on using the VMware software initiator - not extra iSCSI cards. Is that also a big performance issue?
> Well all the vmdk files would be on the iSCSI SAN so why wouldn't I have vmotion capabilities?
In this case you have it.
I wrote "when your c: drives reside on local disk"
I tested it (iSCSI initiator in the VM) - I saw quite high CPU utilisation in the VM, but higher throughput than with the ESX software initiator. I tested VMotion under load and always ended up with an isolated VM (others have tested it too).
When managing this, don't forget the number of IP addresses (for the iSCSI initiators in the VMs) you will have to manage.
If you want to create your C: disks on the iSCSI SAN, then you must use either the ESX software initiator or an iSCSI HBA, and then in addition you will configure iSCSI initiators in all the VMs - that seems too complex to me.
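To make the per-guest overhead concrete, here is a minimal sketch of what every VM would need, using the built-in Microsoft `iscsicli` tool; the portal address and target IQN are hypothetical placeholders for your own SAN's values:

```shell
# Sketch of an in-guest iSCSI connection on Windows using iscsicli.
# The portal IP and the target IQN below are hypothetical examples -
# substitute your SAN's actual values.

# Register the SAN's portal with the guest's initiator
iscsicli AddTargetPortal 192.168.10.50 3260

# Confirm the data-drive target was discovered
iscsicli ListTargets

# Log in to the target with default settings
iscsicli QuickLoginTarget iqn.2006-01.com.example:datavol1
```

Repeating this (plus dedicated IP addressing on the iSCSI network) in every VM is the management overhead described above.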
> We plan to try VCB as well (I know it's not supported) so I thought that doing all vmdk would make this easier to backup.
VCB will be totally ignorant of any drive mappings from within the guest OS (i.e. MS iSCSI initiator), so if you're thinking of that at all, you've got your answer...
If you use the MS iSCSI initiator from within the guests instead of VMware vmdk files on a VMFS partition, you will most likely disable many features:
no snapshots, no VCB.
If anyone has done testing with regard to VMotion using the MS iSCSI initiator, it would be nice to know. I have had one individual here suggest this as an option, and I really didn't like it because it gets away from the concept of the entire environment being encapsulated, and it is dependent on configuration inside the operating systems - i.e. you won't see the configuration from VirtualCenter.
Regards,
Jon
so......you can't snapshot a LUN if it is bound to a VM via MS iSCSI Initiator??? What else can't you do?
How else do you bind a volume to a VM?
Adrian, what you plan to do is quite a common concept; some of the answers above are just theoretical. To give you the best answer I would need to know which iSCSI solution you plan to use, since that opens up various options.
Generally:
Initiating iSCSI from inside the VM locks you out of some VMware features but usually opens the door to some SAN-level features (e.g. application-aware snapshots). Don't be sad - VMware VMDK snapshots are a drawback for some operations anyway.
It's common to have a LUN initiated through the Microsoft Initiator inside the VM and to run path aggregation to increase throughput for sequential traffic.
You need common server-class NICs instead of HBAs for this design; HBAs don't appear as network cards usable for iSCSI inside the VM.
You will also need some CPU headroom inside the VM for the iSCSI initiator. Plan for it in your design.
I've tested VMotion migrations with a VM-level initiator running under load on an iStor array, with no problems.
--
christianZ, I've had a different experience VMotioning a VM with a VM-level iSCSI initiator under load. I did it successfully with ESX 3.5, the Microsoft Initiator, Windows Server 2003 R2, and an iStor iS512. What's more, during the VMotion operation I used 2 connections aggregated through the MS initiator's Multiple Connections per Session feature.
To compare notes, I'm curious what setup you used for your tests?
--