In a nutshell, my current environment:
1 Intel cluster of 5 hosts (Core 2 and i7) for production, attached to the production SAN
1 Intel cluster of 2 hosts (Core 2) for development, attached to the development SAN
4 standalone Intel hosts (Core 2), some for production, some for development
6 standalone AMD hosts (older Opterons) for development
I am beginning the process of upgrading to vSphere, and I've suggested pooling all of our Intel boxes into one cluster on the production SAN. We have enough capacity to retire our AMD boxes. I could use resource pools and VLANs to keep production and development VMs separated. I'm looking for any thoughts and comments on this plan. I think it might be a good idea, but I'm curious to know if anyone thinks otherwise.
There is a tradeoff between putting a lot of nodes in a cluster and using few (but enough) nodes...
With too few nodes you can run into capacity problems with admission control.
With too many, the cluster could be a little slower.
Also, too many nodes on the same LUNs may not be the best choice.
IMHO, I suggest using clusters of 4-6 nodes.
Can you describe what kind of admission control problems would arise? Have you had any experience with clusters greater than 6 nodes that makes you pick 4-6 as an optimum number? I know the max is 32 nodes per cluster, and I won't be going anywhere near that number. I fully understand the issues that could arise from too much I/O on a LUN; in fact, this cluster would be using 3 to 4 different LUNs with no extents...
Can you describe what kind of admission control problems would arise?
Fewer nodes = bigger resource problems when a host fails.
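To make that failover math concrete, here's a rough sketch in plain Python (the GHz figures are made up for illustration) of why a small cluster loses a much bigger share of its capacity to HA failover headroom than a large one:

```python
# Back-of-the-envelope HA admission control math (illustrative numbers only).
# With "host failures tolerated" admission control, the cluster must keep
# enough spare capacity to restart every VM if N hosts die.

def usable_capacity(hosts, ghz_per_host, host_failures_tolerated=1):
    """Capacity left for running VMs after reserving failover headroom."""
    total = hosts * ghz_per_host
    reserved = host_failures_tolerated * ghz_per_host
    return total - reserved

# A 2-node cluster gives up half its capacity to the failover reservation...
small = usable_capacity(hosts=2, ghz_per_host=20)   # 40 - 20 = 20 GHz usable
# ...while an 8-node cluster only gives up an eighth.
large = usable_capacity(hosts=8, ghz_per_host=20)   # 160 - 20 = 140 GHz usable

print(small / (2 * 20))   # 0.5   -> only 50% of the small cluster is usable
print(large / (8 * 20))   # 0.875 -> 87.5% of the large cluster is usable
```

Same hardware per host, same "tolerate one host failure" policy, but the small cluster can only commit half of its resources before admission control starts blocking power-ons.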
Have you had any experience with clusters greater than 6 nodes that makes you pick 4-6 as an optimum number?
With ESX 3.5 a good number is 5 (the maximum number of HA primary nodes). On ESX 4.0 this number is not as important, but I/O load and the number of ESX hosts hitting the same LUN can make the realistic maximum smaller than 32.
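As a rough illustration of why the per-LUN host count matters (the VM counts here are invented, and this assumes VMs are spread evenly across the cluster's LUNs):

```python
# Illustrative only: the more ESX hosts and VMs share one VMFS LUN, the more
# metadata operations (e.g. SCSI reservations) contend on that LUN.

def vms_per_lun(hosts, vms_per_host, luns):
    """Average number of VMs contending on each LUN, assuming an even spread."""
    return hosts * vms_per_host / luns

# 12 hosts with 10 VMs each, spread over 4 LUNs -> 30 VMs per LUN.
print(vms_per_lun(hosts=12, vms_per_host=10, luns=4))
# The same 12 hosts over 8 LUNs halves the contention -> 15 VMs per LUN.
print(vms_per_lun(hosts=12, vms_per_host=10, luns=8))
```

So growing the cluster without also growing the number of LUNs is what concentrates the I/O, which is why a big cluster on a handful of shared LUNs hits trouble long before the 32-node limit.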
For more info on how HA works: