Hi there,
OK, so my setup is an HP C7000 blade enclosure with 8 BL460s connected via dual SAS controllers to 3 HP MSA2000sa storage arrays, each fitted with 12 1TB disks.
We've been asked to create a couple of large 8TB volumes for use as file shares on a file server (Windows Server 2008 under VMware ESX 4). The problem is that I also have a backup server on the network (Windows Server 2008, not under VMware) which has a direct connection to the SAS storage arrays, and the intention is that this server will also have access to the large volumes so it can dump the data to tape directly from the storage array, instead of dragging it all over the network from the file server.
At the moment I have created an 8TB vdisk, filled it with a single 8TB volume, and presented that as a single LUN.
If the file server were not installed under VMware, I'm assuming the 8TB LUN would have appeared to Windows Server 2008 with no problems; I could format it as a single NTFS partition and away I go, and it would also appear to the backup server as an 8TB NTFS volume.
As my file server sits underneath VMware ESX 4, as expected it doesn't see the 8TB LUN.
Soooo. What are my options?
Create an 8x1TB volume set within the 8TB vdisk and present these as 8 RDMs to Windows. If I then use Windows to stripe these together into an 8TB dynamic volume, the backup server running outside of VMware isn't going to make much sense of them, is it? (Or is it?)
Apart from bringing my backup server under VMware and then using extents to create an 8TB VMFS disk out of 8x1TB LUNs, which would be visible to two independent VMware machines on separate ESX hosts, I don't see what else I can do.
Are there any ways around this 2TB limit? It seems insane that we are limited to 2TB these days. It's not going to be long before individual disks are already too large for VMware.
You've got a tough row to hoe, here. I think I can propose 2 solutions for you, but I'm not sure either will work. First, there is no way around the 2TB limit currently imposed in VMware, as has been stated. So, how do we fake it?
1) I would recommend spanning rather than striping on the drive you create in Windows. This should allow it to be presented to another system, though I have not tested this myself. I believe you will have to make it a "dynamic disk" to do this, using GPT rather than MBR.
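The GPT + dynamic-disk + spanning setup could look something like this in diskpart. This is only a sketch: the disk numbers are assumptions for illustration, so check `list disk` on your own system before running anything like it.

```
rem Convert each RDM to GPT and then to a dynamic disk
select disk 1
convert gpt
convert dynamic
select disk 2
convert gpt
convert dynamic
rem ...repeat for the remaining RDMs...

rem Create a volume on the first disk, then extend it across
rem the others to build one large spanned dynamic volume
create volume simple disk=1
extend disk=2
rem ...extend onto each remaining disk in turn...
```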
2) NTFS is NOT a cluster filesystem. You cannot present a volume to 2 systems for R/W access simultaneously without making those systems into a cluster. Your best option here is to use a SAN-based snapshot on the MSA (I believe they come licensed to support this). Otherwise, you are stuck with the network-based backup for RDMs.
2a) If you use VMDKs on VMFS to make your drives, then you can still do snapshot-based backups direct from disk with VCB or another VMFS-/VMDK-aware tool. In that case, you won't have the storage overhead of the SAN snapshot per se, but you still have to account for room for the VMDK snapshot on your VMFS filesystems. I would start with 5 2TB LUNs holding a 1.6TB VMDK each. I would think this provides sufficient room for snapshot disks, but you need to take your environment into consideration on this setting.
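The headroom maths in 2a can be sanity-checked quickly. A sketch only: the 20% figure simply falls out of the 1.6TB-in-2TB layout suggested above, it is not a VMware rule.

```python
# Sanity-check the VMFS snapshot headroom for the layout above:
# 5 LUNs of 2 TB, each holding a single 1.6 TB VMDK.
lun_count = 5
lun_size_tb = 2.0
vmdk_size_tb = 1.6

total_vmdk_tb = lun_count * vmdk_size_tb          # usable guest capacity
headroom_per_lun_tb = lun_size_tb - vmdk_size_tb  # space left for snapshot deltas
headroom_fraction = headroom_per_lun_tb / lun_size_tb

print(f"Guest capacity: {total_vmdk_tb:.1f} TB")
print(f"Headroom per LUN: {headroom_per_lun_tb:.1f} TB ({headroom_fraction:.0%})")
```

So the five 1.6TB VMDKs add up to the 8TB the file server needs, while each LUN keeps 0.4TB (20%) free for snapshot delta files during the backup window.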
Happy virtualizing!
JP
Please consider awarding points to helpful or correct replies.
I'm also making quite a big assumption here that 2 Windows Server 2008 machines plugged into the same SAS switch would be able to simultaneously access the same LUN for both read and write.
There is no way around the 2TB limit. You have two options: present virtual disks or RDMs and have the OS stripe across them, or install a software iSCSI initiator inside the VM and present the 8TB of storage through iSCSI.
If you find this or any other answer useful please consider awarding points by marking the answer correct or helpful
Graham,
What I have done in the past for this same situation was to create 500GB, 1TB and 1.8TB LUNs, then attach each LUN inside Windows as a mount point.
The problem might be the data: I was able to group things into subdirectories that stayed under the 500GB size.
You can go up to 2TB per LUN, but it's not recommended, as it does not give VMware breathing room (1.8TB recommended).
Dir example (the root is a 500GB volume; each named directory is its own LUN mounted as a mount point):

Root (500GB)
    Marketing (500GB)
    Development (500GB)
        Source Code (1.8TB)
        Bugs (500GB)
    Sales (500GB)
    etc.
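Each of those mount points can be wired up per LUN with diskpart. A sketch only: the disk number and the D:\Shares path are made up for illustration, so substitute your own.

```
rem Mount the 1.8TB "Source Code" LUN as a folder instead of a drive letter
select disk 3
convert gpt
create partition primary
format fs=ntfs quick
assign mount="D:\Shares\Development\Source Code"
```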
The plus here is that if a volume goes missing or gets corrupted, you don't lose all the data.
Rosco
www.PHDVirtual.com makers of esXpress
OK guys. Thanks for all the advice. It's been very useful.
I have ended up splitting the RAID into 1.5TB LUNs and presenting these to Windows via ESX as RDMs. Then in Windows I use dynamic disks to span the LUNs into a single large volume.
I can disconnect the disks from ESX and present them to a raw Windows box, not under ESX, and it is able to just pick the disks up without going anywhere near VMware. This satisfies me from a disaster recovery point of view, when I may want to get data off the array in an emergency, but it doesn't solve multiple hosts accessing the same volume simultaneously. Looks like I'm stuck with pulling that data over the LAN from the file server to the backup node.
Cheers again
Graham
If you'll be doing replication, you might take a look at DFS-R. I'm not sure what the impact would be using it on this scale, but it does provide replication only of changed blocks.
Happy virtualizing!
JP
Please award points for helpful and/or correct replies.