VMware Cloud Community
sccarlson
Contributor

vSphere 5: mounting a VMDK read-only on 192 machines

Here's the scenario with some examples.

vSphere 5

VMFS 5

VCE Vblock

VNX5500 from EMC

A mix of Tier-1, Tier-2, and Tier-3 storage

Multiple Blade Chassis and Blades

We are building a web farm that shares a CGI code base behind Apache. Every node is identical.

Let's say it's mounted on /mnt/CGI.

Each node writes its own unique logs.

Let's say they're mounted on /mnt/logs.

CGI is about 100 GB in size.

I'd like to build a VMDK containing the 100 GB of CGI content, pin it to the Tier-1 SSD storage, and mount it read-only on all 192 guests.

Each of the 192 guest instances (RHEL 5.6) would have:

/mnt/logs (unique filesystem/VMDK, RW)

/mnt/CGI (shared filesystem/VMDK, RO)

/ (unique filesystem/VMDK, RW)
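
For illustration, each guest's /etc/fstab might look something like the sketch below. The device names are hypothetical (whatever the guest enumerates the extra disks as), and ext3 is assumed since these are RHEL 5.6 guests; the point is that the shared CGI disk carries the ro option so the guest never opens it for writing:

    /dev/VolGroup00/LogVol00   /           ext3   defaults     1 1
    /dev/sdb1                  /mnt/logs   ext3   defaults     1 2
    /dev/sdc1                  /mnt/CGI    ext3   ro,noatime   0 0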

Has anyone ever accomplished this? We don't want to use NFS or anything else in this scenario.

mcowger
Immortal

Unfortunately, this is not possible.

A given VMDK can be opened by only a limited number of VMs (far fewer than 192).

Why isn't NFS an option?

--Matt VCDX #52 blog.cowger.us
sccarlson
Contributor

Do you know what the limit is on the number of VMs that can mount an image?

We want to avoid NFS because we'd have to spin up a host to serve it; our VCE implementation of the VNX5500 doesn't appear to support the NFS component. We're still looking into that piece to see if it's even on the table.

mcowger
Immortal

64 VMs is the current limit, but you are likely to hit another limit before that: VMFS prevents more than 8 hosts (not guests) from having a given file open at a time. So you couldn't build a cluster larger than 8 hosts, which may or may not get you to 192 VMs.
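
To put rough numbers on it (a quick sketch, assuming the 8-host file-open limit is the binding constraint):

    # Back-of-the-envelope: how dense would the cluster have to be?
    vms_needed = 192
    max_hosts = 8                     # VMFS cap on hosts with one file open
    print(vms_needed / max_hosts)     # 24.0 VMs per host

So every host would have to carry 24 VMs before you account for any host failures.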

However, when you say your Vblock doesn't support NFS, are you saying you purchased it without the data movers (possible), or that you think VCE won't let you use NFS (they will)?

--Matt VCDX #52 blog.cowger.us
sccarlson
Contributor

To close the loop on this issue:

a. We didn't buy the NFS option with our Vblock; we're looking into that for the future.

b. We have discovered that all of our VMs must be unique, due to unique logging behavior within the very directories we thought we could share. Unfortunately, we'll have to design for a scenario where all of the VMs pre-load.

We're basically at the point where each VM generates 200 IOPS of load, and we need to support 192 VMs concurrently.

200-250 spindles of 10k drives should handle that; we're just trying to figure out whether to change our load processes now, or whether we can fit all of those 600 GB drives in there.
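
The rough math behind that estimate (a sketch; the per-spindle figures are our assumptions for 10k drives, and RAID write penalty is ignored):

    import math

    # Naive spindle count for the aggregate front-end load,
    # assuming 150-180 IOPS per 10k-RPM spindle.
    vms = 192
    iops_per_vm = 200
    total_iops = vms * iops_per_vm        # 38,400 IOPS
    for per_spindle in (150, 180):
        print(per_spindle, math.ceil(total_iops / per_spindle))
    # -> 150 IOPS/spindle: 256 spindles; 180 IOPS/spindle: 214 spindles

That lands in the same ballpark as the 200-250 figure; the exact count depends on the read/write mix and cache hit rate.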

Thanks for your help

mcowger
Immortal

Before you buy 250 spindles to support that workload, you might look into the cost of adding the data movers and a couple of SSDs for FAST Cache. I suspect it would be cheaper than 250 spindles, and it would let you have that common NFS source directory (which would get promoted into FAST Cache pretty quickly) plus a directory for each host to write logs to.

--Matt VCDX #52 blog.cowger.us