I'm in the final steps of planning to consolidate 15 physical servers onto 2 VMware hosts. My intention is to use an iSCSI SAN as the data store. I'm having trouble finding information on the specifics of consolidating file shares (CIFS/SMB).
I have close to 50 file shares spread across the 15 physical servers. I've often read about how SANs and virtualization will "eliminate islands of data" but very few specifics about exactly how to do that. Can someone point me in the right direction? Are there any good tech notes, whitepapers or other resources out there?
Is it possible to simply consolidate all of the data onto a single partition on the SAN and then connect to that partition from one of the VMs? Then, once the VM is connected, share it out via the CIFS/SMB support built into the guest Windows OS?
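If you go that route, the migration itself is mostly a copy-and-reshare job. A rough sketch for one share, run on the consolidated Windows VM (server names, paths, share names, and the group below are placeholders, not anything from your environment):

```shell
:: Copy one share's data from an old server to the consolidated volume.
:: /E copies subfolders including empty ones; /COPYALL (= /COPY:DATSOU)
:: preserves NTFS ACLs, ownership, and timestamps along with the data.
robocopy \\OLDSERVER1\Accounting E:\Shares\Accounting /E /COPYALL /R:2 /W:5 /LOG:C:\Logs\accounting.log

:: Re-create the share on the new VM. Share-level permissions are not
:: stored on disk, so they must be recreated separately from NTFS ACLs.
net share Accounting=E:\Shares\Accounting /GRANT:"DOMAIN\Accounting Users",FULL
```

Repeat per share (or script the loop), then repoint your logon scripts/DFS/mapped drives at the new server.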
Am I making this too complicated? Should I just use the SAN's built-in SMB file sharing as an end-run around Windows? The one thing that gives me pause in that scenario is whether the SAN's software integrates with Active Directory for controlling ACLs on the shares.
What about the performance of the SAN versus a virtualized server hosting file shares? Is serving up 25+ file shares going to put enough load on the SAN that it will degrade VM access to the data stores?
One final thing I'm not quite sure about: my deployment plan calls for each physical ESX host to be connected to two networks through two separate HBAs. One network will be the SAN network for iSCSI traffic, and the other will be the main network that the workstations are connected to. Unless the SAN has multiple NICs, I'm going to run into a problem keeping the storage traffic separate from the regular network traffic, right?
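For what it's worth, the usual way to keep that separation on the ESX side is a dedicated vSwitch with its own uplinks and a VMkernel port for iSCSI, apart from the vSwitch carrying VM traffic. A sketch from the ESX service console (vSwitch name, vmnic numbers, port group name, and IP are all placeholders for your environment):

```shell
# Create a vSwitch reserved for storage traffic.
esxcfg-vswitch -a vSwitch1

# Attach the two storage-facing NICs as uplinks.
esxcfg-vswitch -L vmnic2 vSwitch1
esxcfg-vswitch -L vmnic3 vSwitch1

# Add a port group and a VMkernel interface for iSCSI on that vSwitch.
esxcfg-vswitch -A "iSCSI VMkernel" vSwitch1
esxcfg-vmknic -a -i 10.0.10.11 -n 255.255.255.0 "iSCSI VMkernel"
```

With the storage network on its own subnet and uplinks, VM traffic on vSwitch0 never mixes with iSCSI traffic regardless of how many NICs the SAN itself has.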
Yeah, you might be overthinking this a bit. Just build up your VMs to perform whatever roles they need to perform, then carve out LUNs to be served up by whichever VM you wish. On my VMs that do nothing but serve straight-up CIFS shares, I use guest-attached iSCSI LUNs (as opposed to storing the content inside a .vmdk sitting on a VMFS volume).
Everyone's situation is a bit different, but if you have a well-designed infrastructure, the demand from flat-file storage shouldn't be a burden. Build one up so you can learn how to monitor it.
Thanks for the reply. When you talk about "guest attached iSCSI LUNs", are you saying that you connect directly to the iSCSI LUN from within the Guest OS (Windows?)?
Is that functionality built into the OS (Server 2003/2008), or does it require a 3rd party app?
Yes, that is exactly what I'm talking about. I have a Dell/EqualLogic PS5000 iSCSI SAN with a handful of LUNs that are VMFS volumes; this is where the VMs themselves reside. Then, for VMs that have large flat-file storage needs, or are running SQL or Exchange, I create LUNs for those specific purposes (e.g. SQL databases, SQL transaction log files, etc.), each attached by only one particular VM (it isn't a clustered filesystem like VMFS). The guest OS attaches to these drives via the guest iSCSI initiator (built into Server 2008; for Server 2003 it's a free download from Microsoft).
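A minimal sketch of attaching such a LUN from inside a Windows Server 2008 guest with the built-in initiator's command line (the portal IP and target IQN below are placeholders for your SAN):

```shell
:: Register the SAN's discovery portal, see what targets it exposes,
:: then log in to the one holding this VM's data LUN.
iscsicli QAddTargetPortal 10.0.10.50
iscsicli ListTargets
iscsicli QLoginTarget iqn.2001-05.com.equallogic:filedata
```

After the login, the LUN appears in Disk Management as a new local disk you can bring online and format NTFS. Mark the login as persistent (easiest via the iSCSI Initiator control panel) so the guest reconnects the disk at every boot.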
This approach is different from, say, creating a storage area that resides as a .vmdk on a VMFS LUN. There are advantages and disadvantages to both, so you will want to understand those clearly. (I don't use guest-attached LUNs unless there's a need for them; there's no reason to make things more complicated than necessary.)
SAN solutions vary by manufacturer (I'm not sure if you specified what you have), so you'll need to understand what capabilities and limitations yours might have.
Here's some reading for you: