VMware Horizon Community
Hypnotoad
Contributor

View Architecture Questions

I'm playing around with View Premium 4.6 in the lab. I had a design in my head but recently went to a VMUG event and now I'm confused.

For servers, I have two HP DL385 G7s with dual procs (12 cores each) and 128GB of DRAM. I was planning to build them up as a dedicated View cluster with shared storage on my Compellent FC SAN. The speaker at the VMUG event discussed keeping the parent replicas and the composed machines on local storage and using shared storage only for the actual parent/snapshot. He also mentioned using SSD drives in each host. I can't do that at this time, but it's an interesting idea.

My workstations will be Win7 Enterprise 32-bit with GB RAM and a single proc. Apps are general office apps plus one specialized database app for healthcare records.

Should I use shared storage for machines?

How many machines per host should I be able to run? See specs above.

I use ESXi 4.1. Which edition of ESXi should I use?

Any tips to optimize my environment would be appreciated. I want to start a pilot rollout at the end of the month.

--Patrick

8 Replies
admin
Immortal

Patrick,

Local storage is faster, so I can understand why they would recommend putting some things there (especially on SSDs).

However, if you have a sizable number of VMs, you may need to offload the clones onto shared storage.  Each LUN also gets a replica, so you can't keep the replicas only on local storage if you move clones to shared storage.

I guess in my mind, in order to use local storage only, you would have to have lots of capacity.  Even then, I don't think you can use DRS to migrate the clones from one host to another without shared storage (i.e., vMotion).

Generally the rule of thumb is 8 VMs per core.

I'd like to hear what others think. --Drew

idle-jam
Immortal

Also, can you share with us how many users you plan to host? With that you can gauge whether local storage is needed (SSD for the IOPS) or whether shared storage will suffice. This is a good site for calculating the IOPS: http://myvirtualcloud.net/?page_id=1076
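
If you just want a rough gauge before digging into that calculator, this is the kind of back-of-envelope math it automates. The per-user IOPS, read/write split, and RAID penalty below are assumptions, not measurements from your environment:

# Rough steady-state IOPS estimate -- replace the assumed figures with numbers
# measured from your own desktops before making any sizing decision.
users = 200                        # however many desktops you plan to host
iops_per_user = 12                 # assumed "light office" steady-state IOPS per user
read_ratio, write_ratio = 0.2, 0.8 # assumed split; VDI workloads tend to be write-heavy
raid_write_penalty = 2             # e.g. RAID 10 (RAID 5 would be 4)

frontend_iops = users * iops_per_user
backend_iops = (frontend_iops * read_ratio) + (frontend_iops * write_ratio * raid_write_penalty)

print(f"Frontend IOPS: {frontend_iops:.0f}")
print(f"Backend (spindle) IOPS after RAID penalty: {backend_iops:.0f}")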

Hypnotoad
Contributor

I have 210 View licenses for now but plan to grow to about 550. I'll add hosts to the cluster as I grow. So if I go with 7 guests per core I would be at 168 guests per host. 128GB of DRAM should probably work fine. I'm thinking that I'll plan for about 100-125 guests per host and add hosts as necessary. My SAN has a LOT of capacity (both space and IOPS) and all of my hosts have redundant FC HBAs.
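
Here's the back-of-envelope math I'm working from (the per-guest RAM and overhead figures are just assumptions on my part for now):

# Quick sanity check on guests per host -- per-guest RAM and overhead are assumed values.
cores_per_host = 2 * 12          # dual 12-core procs in each DL385 G7
guests_per_core = 7              # a notch below the 8-per-core rule of thumb
host_ram_gb = 128
ram_per_guest_gb = 1             # assumed allocation for a Win7 32-bit guest
overhead_gb = 8                  # assumed headroom for ESXi and per-VM overhead

cpu_limit = cores_per_host * guests_per_core
ram_limit = (host_ram_gb - overhead_gb) // ram_per_guest_gb

print(f"CPU-bound limit: {cpu_limit} guests per host")
print(f"RAM-bound limit: {ram_limit} guests per host")
print(f"Planning figure: {min(cpu_limit, ram_limit, 125)} guests per host")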

I've been allocating storage groups in 500GB chunks for this and balancing the guests across them. Does this make sense? I noticed that I end up with one parent replica per volume when I do this. I'm keeping these, and the guests, on my faster 15K spindles. I'd love to add some SSD disks to my SAN.

--Patrick

mittim12
Immortal

The biggest gotcha with local storage is going to be availability. If you are using local storage for the replicas and clones, then you cannot take advantage of HA, which means if you lose a server those machines will not fail over.

1:  Shared storage is more flexible, but local storage can be cheaper. I say if you do not have a high tolerance for failure, then use shared storage. If you have SSD drives in your SAN, you can use View's storage tiering to place the replicas on that LUN and keep all the clones on lower-tier storage.

2:  Mileage varies on how much you can consolidate your environment. I think you will run out of RAM before you run out of CPU.

3:  I think you may need Enterprise Plus or Advanced to use all 12 cores per socket.

gunnarb
Expert

I've heard this speech more and more over the years.  Originally it was "put everything on central storage" (buy a SAN), and now you are hearing the reverse.  I think there are multiple reasons you are hearing this, some of which have been mentioned.  When you put an SSD in your local ESX host and drop your replica there, you will get better performance; after all, a local bus is always going to be faster than any Ethernet/FC bus.  So for performance, absolutely this makes sense.  However, let's not forget why we moved all that data onto the centralized SAN in the first place: security, control, management, etc.

The main reason I think you are hearing a push for local storage is that for optimum VM performance you need a decent SAN with quite a few IOPS, and that isn't cheap.  You already have a good SAN, so I wouldn't change a thing.  (Well, I might take advantage of an SSD in the host to drop the swap files there too, as it's just a waste to run that over the network, and it saves space on the SAN.)  Anyway, the push is money related.  As more companies look to move to VDI, VMware (hell, Citrix too) needs a solution that doesn't require an expensive SAN.  So the messaging changed.  It's no longer quite what you'd hear if you were designing a vSphere environment; it still has a little of that vibe, but it's shifting toward cheaper storage, and local storage is as cheap as it gets.  It does mean you'll have more replicas out there, and of course you can't svMotion those replicas because they aren't on a SAN, and you lose any other advantage of a SAN (third-party snapshot utilities, data dedup, etc.).

Basically, IMO it's a money thing.  Yes, there are legitimate performance reasons for doing this, so VMware, vendors, and consultants can recommend it without looking bad.  However, performance comes at the cost of losing those extra management features you have on that Compellent.  But if you don't need a SAN, that makes View available to a much wider audience.  In fact, I expect you'll see more features that support View running in an environment without a SAN.  If they can give us the features of a SAN without the need for a SAN, then all of a sudden you'll see a complete reversal and everything will go to cheaper, faster, better... maybe local storage.

Gunnar Berger

(Just speaking for myself here, not on behalf of my company)

Gunnar Berger http://www.gunnarberger.com http://www.endusercomputing.com
admin
Immortal

Hypnotoad,

What model SAN is it?  How many drives?  What is the capacity / spindle speed? (10K/15k)

Is it 8Gb FC or 4Gb FC?

Drew

Hypnotoad
Contributor

It is a Compellent SAN with dual Series 40 controllers. Everything is fibre connected with 4Gb HBAs. My SAN has 32 spindles of 15K disk, 16 spindles of 10K disk, and 40 spindles of 7.2K disk.

Patrick

gunnarb
Expert

Patrick,

I highly recommend you take a gander at this document:

http://www.vmware.com/files/pdf/resources/vmware-view-reference-architecture.pdf

What you really need to read up on is pod/block design.  Long story short, VMware best practice is a max of 128 VMs per host.  With 12 cores per socket, I expect this will be easy enough for you to do, so long as you have a ton of RAM.  8 VMs per core is safe; 12 is completely doable.  It really depends on what your users are doing.  Eventually you'll be able to get the APEX 2800 (server offload card), and with that in your server you'll have no issue hitting the 128-VM recommended max.
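
To put rough numbers on the block sizing for your growth plan (the per-host density and the N+1 headroom below are assumptions you'd adjust, not anything mandated by the reference architecture):

import math

# Rough host-count sketch for growing from the pilot to the full deployment.
desktops_now, desktops_target = 210, 550
vms_per_host = 125               # your planning figure; best practice caps at 128
spare_hosts_for_ha = 1           # assumed N+1 so HA has somewhere to restart desktops

hosts_now = math.ceil(desktops_now / vms_per_host) + spare_hosts_for_ha
hosts_target = math.ceil(desktops_target / vms_per_host) + spare_hosts_for_ha

print(f"Hosts for {desktops_now} desktops: {hosts_now}")
print(f"Hosts for {desktops_target} desktops: {hosts_target}")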

Also, there are a bunch of guides that talk about optimizing the VM.  It's in the Admin guide; see pages 56-57:

https://www.vmware.com/pdf/view-46-administration.pdf

It's also in the Windows 7 Optimization Guide.

http://www.vmware.com/files/pdf/VMware-View-OptimizationGuideWindows7-EN.pdf

On the storage side, I would hope 500GB is high, as you should be using linked clones and refreshing often.  If that's the case, you could easily have your 210 VMs take maybe 250-300GB.  This could actually be much lower if you set the swap file to be on local storage (50-100GB).  In the environments I design I recommend weekly refreshes, and in those environments it's rare that a VM grows by much in a week (sub 1GB).  The longer you go between refreshes, the larger you'll need to make the LUN.  So I guess what I'm saying is that you shouldn't need a 500GB LUN, but you might need a 100GB LUN running across 40 spindles, if you have them.  In View design you don't need much storage, you just need a ton of IOPS.  This is why local storage starts to make a lot of sense; a single SSD gives you a 30x increase in IOPS.

Also, I wouldn't even bother with RAID and such.  I actually purpose-build servers to be single points of failure, because it saves a ton of money, and I depend on VMware to be my redundancy.  But that's just me being a cheap bastard; it never hurts to add some redundancy to a server... I just think DRS/HA is good enough, as the only moving part in my servers these days is the fan.
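
If you want to sanity-check the LUN sizing yourself, this is the kind of rough math I mean. The replica size, growth rate, swap size, and datastore count below are assumptions for illustration, not View defaults:

# Linked-clone datastore sizing sketch -- all figures below are assumed; adjust to taste.
vms = 210
replica_size_gb = 20             # assumed size of the parent replica on each datastore
growth_gb_per_week = 1           # assumed worst-case clone growth between refreshes
weeks_between_refreshes = 1      # weekly refresh, as suggested above
swap_gb_per_vm = 1               # assumed swap size; irrelevant if swap lives on local disk
swap_on_local_storage = True

per_clone_gb = growth_gb_per_week * weeks_between_refreshes
if not swap_on_local_storage:
    per_clone_gb += swap_gb_per_vm

datastores = 4                   # however many LUNs you spread the pool across (assumed)
per_datastore_gb = replica_size_gb + (vms / datastores) * per_clone_gb
print(f"~{per_datastore_gb:.0f} GB per datastore for {vms} clones across {datastores} LUNs")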

Gunnar Berger http://www.gunnarberger.com http://www.endusercomputing.com