LucasAlbers
Expert

Access more than 2 TB as one volume using Dell hardware


I am familiar with the maximum datastore and VMDK size limits of roughly 2 TB.

We currently run a Linux NFS server that we would like to give access to more than 2 TB as a single volume.

This currently runs on a Dell R710, and we would like to expand its available storage with a disk shelf.

Our working dataset for this system is expected to be between 10 TB and 20 TB; it runs continual batch jobs that take a few months to complete.

What disk shelf can we get that will not force us to split the storage into a ton of 2 TB datastores?

If we use direct I/O or pass through an HBA card, will that allow the VM to see more than 2 TB of space?

7 Replies
Rumple
Virtuoso

The VMFS volume is what has the size limitation, not the server itself.

If you want to connect the storage directly to a VM using the Microsoft iSCSI initiator, you will not hit the 2 TB volume limit, but that requires the storage to live on another system.

You can also (although it is not recommended) join multiple 2 TB extents together to form a single VMFS volume of up to 64 TB...

NFS storage itself does not have the 2 TB limit either, but the size of your individual VMDK files is still capped. Again, you could then use dynamic disks in Windows to stripe across multiple VMDKs to get around it. Not a highly recommended solution, but workable...

AndreTheGiant
Immortal

As written, if you have an iSCSI solution, the simplest way is to use a software initiator to connect the LUN directly inside your VM.

Another solution (personally I do not like it too much) is to use several VMDKs and then "merge" them at the guest level into a bigger volume (with LVM on Linux or with dynamic disks on Windows).
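The guest-level LVM merge would look roughly like this on Linux. This is a hedged sketch, not a tested recipe: the device names (/dev/sdb through /dev/sde) and the volume group and mount point names are made-up placeholders for the extra virtual disks presented to the VM, and every command requires root.

```shell
# Sketch only: device names, VG/LV names, and mount point are hypothetical.
# Each /dev/sdX is one of the ~2 TB virtual disks attached to the Linux VM.

# Mark each virtual disk as an LVM physical volume
pvcreate /dev/sdb /dev/sdc /dev/sdd /dev/sde

# Group them into a single volume group
vgcreate bigvg /dev/sdb /dev/sdc /dev/sdd /dev/sde

# Carve one logical volume spanning all the free space
lvcreate -n bigvol -l 100%FREE bigvg

# Put a filesystem on it and mount it (ext4 here; XFS is also common at this size)
mkfs.ext4 /dev/bigvg/bigvol
mkdir -p /data
mount /dev/bigvg/bigvol /data
```

Note that with a plain linear (or striped) LV like this, every underlying VMDK is a single point of failure for the whole volume, which is exactly the corruption risk raised elsewhere in this thread.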

Andre

Andre | http://about.me/amauro | http://vinfrastructure.it/ | @Andrea_Mauro
LucasAlbers
Expert

It appears I can create multiple 2 TB VMDK files and then mount them in a Linux VM using LVM.

I did a test and was able to see 4 TB as one volume from a Linux VM.

Does VMware recommend against this approach?

Why is this a bad approach?

We are planning to create a 24 TB volume using this approach.

golddiggie
Champion

Using twelve 2 TB VMDKs to get a 24 TB volume would (in my opinion) impose much more overhead than you would like or want... I think it would be a much better idea to use the already-mentioned iSCSI volume mount option. For one thing, you can get an array that has all the space you need (with spare spindles) and make sure its performance is where you need it to be, or better... You could also use several chassis (if you get some EqualLogic arrays), group them together, and gain even more. These are gains you won't get by simply slapping a dozen volumes together inside a VM...

I would also worry about data corruption with a dozen volumes joined via LVM... You'll want to have some kind of backup scheme for this data as well, so you need to consider that too. Unless the data has zero value to you at any point during the process and you can start the entire thing over if it fails at 99% complete... Personally, I wouldn't take that kind of risk...

VMware VCP4

Consider awarding points for "helpful" and/or "correct" answers.

Hosted Systems Engineer IV (VMware environment)
Brewing beer again!
Rumple
Virtuoso

LVMs in Linux are just like Windows dynamic disks.

The big problem is that should one volume get corrupted, you've just lost the entire 24 TB...

The backup issue also needs to be considered. Most larger backup solutions (TSM, HP Data Protector, etc.) will stream multiple backup threads across multiple volumes, but only a single backup thread can run against a single volume...

So, 4-6 TB volumes would have at least 4x the backup throughput of a single 24 TB volume...


LucasAlbers
Expert

So we don't need backups on this, as the data is imported and generated from another location.

I do worry about getting one volume corrupted.

Rumple
Virtuoso

Personally, I would get an iSCSI-based storage unit and have the VM talk to it directly using an iSCSI initiator (software, or hardware pass-through).

Too many layers of software partitioning for my comfort.
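On Linux, the software-initiator route suggested above would look roughly like this with open-iscsi. This is a sketch under assumptions: the target IP address and IQN are made-up placeholders, and the resulting device name depends on what is already attached to the VM.

```shell
# Sketch only: 192.168.1.50 and the IQN are hypothetical placeholders
# for a real iSCSI array's portal address and target name. Requires root
# and the open-iscsi package.

# Discover the targets exported by the array
iscsiadm -m discovery -t sendtargets -p 192.168.1.50

# Log in to the discovered target
iscsiadm -m node -T iqn.2001-05.com.example:big-lun -p 192.168.1.50 --login

# The LUN now shows up as an ordinary block device (check with lsblk),
# e.g. /dev/sdb, and can be formatted and mounted directly,
# bypassing VMFS and its 2 TB limit entirely
mkfs.ext4 /dev/sdb
mount /dev/sdb /data
```

Because the guest owns the LUN end to end, there is no VMDK or VMFS layer involved, which is the point of Rumple's recommendation.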
