I was just at a customer site setting up 3.5 ... cool stuff =)! Foundation is a really good licensing option for small businesses.
Anyway! We were discussing configuration options for virtual machines and they hit me with something I wasn't expecting ... they had the idea that they may want to run an iSCSI initiator from inside a VM to connect to SAN storage for their database VM. My first reaction was no way; then I thought about it for a while and decided there is no reason it wouldn't be possible (adding the proper port groups to the vSwitches, etc.). However, I mentioned that it would probably take a huge performance / resource hit to do it this way.
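To be clear about what I mean on the networking side, it would just be adding a VM port group for the guest onto the vSwitch that already carries the iSCSI traffic. A rough sketch from the ESX service console (the switch and port group names below are made up, adjust for your setup):

    # list the existing vSwitches, their uplinks and port groups
    esxcfg-vswitch -l
    # add a virtual machine port group for in-guest iSCSI to the iSCSI vSwitch
    esxcfg-vswitch -A "VM-iSCSI" vSwitch1
    # verify the new port group is there
    esxcfg-vswitch -l

The database VM would then get a second vNIC on that port group and run the Microsoft initiator against the Equallogic group address.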
I am looking for advice from others who run databases in VMs on how they set up their storage. The one requirement this customer has is that they be able to easily 'clone' the live database for use in test VM environments. I suggested just doing a backup of the database and then restoring / mounting it, but they didn't like that idea.
The environment: ESX VI3 Foundation with 3 blades @ 8 x 2.66 GHz and 12-16 GB RAM each, in an HP blade chassis; two networks, one for VM network traffic and one for iSCSI, Console, Mgmt, etc. Software iSCSI.
Would it be best to have three LUNs (one for boot, one for the database, one for transaction logs) and use the SAN-based collection/replication utilities (Equallogic), or two LUNs (one for boot, one for DB & logs) so that VI3 could be used to clone the datastore (or possibly run iSCSI within a VM to a LUN)?
Last question: how are people using VMware to leverage testing against production database servers? Does anyone 'clone' a live DB server and put it into a virtualized test network? Basically they want an easy way to get nearly up-to-date DB data accessible for the programmers to play with.
Thanks in advance for any suggestions!
Hi,
yes, it is possible, but I wouldn't do it, because your storage could saturate the VM network with iSCSI traffic when you run iSCSI inside the VMs.
kind regards,
Reinhard.
ps: Award points if you find answers helpful. Thanks.
Remember, before 3.0 this was the only way of using iSCSI with VMware. I have several customers doing just what you described. They are able to take a snapshot of their database and then mount it to another server for data mining.
I have seen performance charts from SAN vendors that show the Microsoft iSCSI initiator outperforming the VMware one (at least in 3.0.x).
This setup is a little weak for what you're talking about.
If you could dedicate at least one NIC for each iSCSI initiator (one for ESX, one for MS iSCSI), that would be much better. In a better setup, you'd have two NICs for each, teamed together, and spanned across switches for redundancy.
I have run the MS iSCSI initiator from within VMs. Aside from increased CPU use, the performance is much, much better than the ESX iSCSI initiator (3.0.2, but I don't expect major differences with 3.5). We are currently migrating from FC to iSCSI, and will be giving many of the MS VMs their own data drives using the MS iSCSI initiator on their own LUNs. The SAN-based snapshotting and other features make this a much more desirable setup than keeping everything in VMDK files.
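For anyone who hasn't set up the in-guest side before, it's just the standard Microsoft initiator; roughly something like this inside the VM (the portal IP and target IQN below are placeholders, and you'd want CHAP and a persistent login for production):

    rem point the Microsoft initiator at the SAN portal / group address
    iscsicli QAddTargetPortal 10.0.1.10
    rem list the targets the portal advertises
    iscsicli ListTargets
    rem log in to the target that backs this VM's data LUN
    iscsicli QLoginTarget iqn.2001-05.com.equallogic:db-data-01

Once the login succeeds, the volume shows up in Disk Management like any local disk and can be formatted and given a drive letter.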
So yes, that can work, but no, I wouldn't do it in a blade center.
The approach recommended in the Equallogic class that I attended was to use the iSCSI initiator running inside the VM for "big" stuff. The definition of "big" was a bit loose, but generally high IO or large volumes. We're running our SQL server and large file server this way, with the system drive on a VMFS file system and the data directly on the SAN.
So why would you not do this in a blade center? I admit that this is my first experience with blades (usually I have Proliants). The setup is currently that each blade has 2 'NICs' or whatever they are in the blade (since they do not have VMotion or DRS they really don't need any more), which connect to two groups of physical NICs on the back of the blade center (bundles of 5 for load balancing / redundancy). One of these is set up for iSCSI, the other for Console & VM Network.
We use it without issue for Exchange and SQL Servers.
Performance is fine....we dedicate a virtual switch with 2 pNICs for this, which I think is a must. The traffic also has its own VLAN.
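For reference, a setup like that comes out roughly like this on the service console (the vmnic numbers, port group name and VLAN ID are just examples):

    # dedicated vSwitch for iSCSI with two uplinks for redundancy
    esxcfg-vswitch -a vSwitch2
    esxcfg-vswitch -L vmnic2 vSwitch2
    esxcfg-vswitch -L vmnic3 vSwitch2
    # VM port group for the guests' iSCSI traffic, tagged with its own VLAN
    esxcfg-vswitch -A "iSCSI-VM" vSwitch2
    esxcfg-vswitch -v 20 -p "iSCSI-VM" vSwitch2

Teaming and failover policy can then be adjusted in the VI Client if the defaults don't suit.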
The reason we use it is because we use Netapp SnapManager within the VMs, and this is the only way this is possible.
Chris
We use the same system here as Chris. We have 4 BL685c Proliant blades with 8 network ports per blade. We dedicate 2 pNICs (same as Chris) to iSCSI and have no problems running both SQL and Exchange over iSCSI. Maybe once N-Port virtualisation allows virtual HBAs we might change to FCP (assuming Netapp SnapDrive/SnapManager support the configuration). In the meantime the connectivity is pretty good, and our virtual Exchange vs. previous physical Exchange performance is indistinguishable.
Now, if only the vcb snapshot sync_driver didn't cause I/O problems within Exchange I'd be onto a winner.
Gary
I just wouldn't do this with as few physical NICs as you have available to dedicate to each machine.
I can easily saturate a 1 Gbit NIC with iSCSI traffic. From a single VM.
Throw in a bunch of VMs and now they're all contending for the same I/O. Solution? Add more NICs and distribute the load.
There are limitations to how many initiators you can have with ESX, so proper planning is paramount. However, with the right amount of resources you can certainly have this working beautifully - but like I said, I wouldn't do it with 2 NICs - you're just asking for trouble in a production environment.
Sounds like a very good idea; I will suggest that they purchase an additional card for the blades to add NICs if they choose to do iSCSI within the VM, and an additional port group on the blade center.