Looking for some opinions on the subject of providing guest iSCSI access to our SAN.
Right now our hosts are configured with two NICs for the vmkernel iSCSI connection and two for guest (or pass-through) iSCSI; all four NICs are connected to our iSCSI switches. I'm only using guest iSCSI on a handful of test/dev SQL servers. The storage manufacturer recommended guest iSCSI so that their tools could handle volume management for apps like Exchange and SQL. I'm finding that the extra layer of management may not be worth it, and we occasionally lose a volume inside a VM and have to reboot the VM to reconnect it properly.
So I'm looking for advice: would it be better to move the two guest iSCSI connections to either my vmkernel iSCSI network or my VM network, and use VMDKs instead of guest iSCSI? I believe VMDKs would be fine from a performance standpoint; the largest volume is currently 220 GB, and we are not a large company (fewer than 200 employees, fewer than 30 VMs). We are not using dedicated disks for the SQL drives; we use a frameless architecture.
Anyone have pros or cons they would like to share?
To me the biggest question is how you are going to back up the Exchange and SQL boxes, and whether making this change would impact that.
We use app-aware agents in these VMs with SAN as the transport mode, and we would keep that same method after migrating to VMDKs; instead of the storage tools snapping the volumes, VMware would snap them. When we lose a guest-attached volume, it is usually during the VSS snap period, so we would probably gain reliability by moving to virtual disks.
If you gain reliability and simplify the setup at the same time, then that sounds great!
The key issue is backups. Say you have a 220 GB volume of which you need to back up 150 GB. If it's a VMDK, you're going to read the entire 150 GB dataset and write 150 GB out to your backup server. If you're using the storage vendor's snapshot tool, your ESX host will quiesce the database, create a snapshot on the storage system, and resume operations; you're transferring next to no data through your ESX hosts. You can then back up the data directly from the storage system (assuming you have that option and it's configured).
By using smart tools, you've saved 300GB of data coming into and back out of your ESX host. That's a LOT of bandwidth that can be used by the guests doing real work.
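The traffic arithmetic above is easy to put in numbers. Here is a minimal back-of-the-envelope sketch using the 150 GB example from the previous post (the figures are illustrative, not measurements):

```python
# Rough comparison of backup traffic through the ESX host for a
# full VMDK-style backup vs. an array-side snapshot.
# Figures are illustrative, based on the 150 GB example above.

dataset_gb = 150  # data that must be backed up

# VMDK backup: the host reads the full dataset from the SAN and
# writes it back out to the backup server.
vmdk_traffic_gb = dataset_gb + dataset_gb  # read + write

# Array snapshot: the host only quiesces the app and triggers the
# snapshot; the backup data never moves through the host.
snapshot_traffic_gb = 0  # control traffic is negligible

saved_gb = vmdk_traffic_gb - snapshot_traffic_gb
print(f"VMDK backup moves {vmdk_traffic_gb} GB through the host")
print(f"Array snapshot moves ~{snapshot_traffic_gb} GB, saving {saved_gb} GB")
```

That 300 GB of avoided host traffic is where the "LOT of bandwidth" claim comes from; the real savings depend on how much of the volume actually needs backing up.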
Thankfully this is not a big problem for us. Our backup media servers are configured with iSCSI connections to the SAN, so the backups run across the SAN and not through the ESX host network. Backing up a VMFS volume is similar to backing up an NTFS volume. We will continue to use the backup agent in the VM to quiesce the file system. This actually works quite well for us, so the backup aspect is not the main sticking point. The main sticking point is: do we continue to support guest iSCSI when we only use it for a handful of VMs, or do we go all VMDK and simplify our setup?
When the guests do the iSCSI themselves, they have to have a little more intelligence and knowledge of the environment; not needing that knowledge is one of the great features of virtualization. With vmkernel iSCSI, the guest only sees what look like local disks, and you have much more flexibility to change things in the future, like migrating the VMDKs to another SAN (another iSCSI array, or perhaps FC or NFS) without having to reconfigure anything in the guests.
You also run the risk that some guest goes "crazy" and performs incorrect actions against the iSCSI targets, floods the iSCSI VLAN, or does something else that causes a loss of service for others.
...With vmkernel iSCSI, the guest only sees what look like local disks, and you have much more flexibility to change things in the future, like migrating the VMDKs to another SAN (another iSCSI array, or perhaps FC or NFS) without having to reconfigure anything in the guests.
You are correct, there are a lot of reasons to keep your disks virtual: flexibility, reliability, ease of management. These are all the reasons I want to discontinue guest iSCSI access. I'm wondering if anyone out there has moved all their VMs away from direct SAN access, and whether they have any regrets. Would re-allocating those two pNICs to VM traffic be the smart move, or would I be shooting myself in the foot because in the future I'll have that one critical VM that requires direct SAN access?