36 Replies — Latest reply on Dec 19, 2008 7:06 AM by virtek
      • 30. Re: Physical Server Configurations
        LoneStarVAdmin Novice

        I agree with Ken - the level of redundancy in the blade chassis (backplane, power supplies, network & fibre modules, Onboard Administrator, and cooling) makes it a highly available alternative to rack-mount servers, provided your SAN fabric and Ethernet cabling are properly planned. We are running HP c7000s with 14 active BL490c blades, dual Xeon L5420, 32GB RAM, and no internal drives - of the 6 power supply units installed, two have never been needed and the chassis operates at around 1/3 power. We tested disconnecting an FC and an Ethernet VC module, and the failover to the remaining modules worked seamlessly.

        • 31. Re: Physical Server Configurations
          Ken.Cline Champion
          king@it.ibm.com wrote:

          I stated upfront I didn't want to bash HP with that (while I can say I laughed the first time I saw it months ago). I just brought it in for the purpose (and a data point) of the discussion.



          I have to agree w/Massimo here. We've known each other for a long time (and had good fun at each other's expense) - but, in general, he's not one to play the vendor card.


          The SAN discussion is interesting, but it's tricky. You are somewhat forced to buy a single SAN vs. two SANs simply because dealing with two SANs (for HA reasons) is not transparent and not simple by any means. So while I agree that there are more potential outages with a single SAN than with a single chassis ..... most customers will have to accept that one SAN is good enough. But the same customers might argue that, since they have an easy way out of the single chassis "issue" ... they won't go with it (i.e. they will go with 2 x chassis or standard rack-mounted servers).



          I think that a Cisco switch might be a better comparison. If you look at a 6500 series switch, there is a tremendous amount of redundancy built into the chassis. Many customers are quite comfortable with only one of those beasts, but most "enterprise" customers will opt for two to provide redundancy. I think the same is true for the blade chassis. Most SMB or SME customers are going to be willing to accept the (minimal) risk associated with a single chassis. The customer with a multi-million dollar IT budget is much more likely to deploy more than one - both because they are concerned about availability and because they simply have enough demand to require more than one to satisfy the workload.


          Ken Cline

          Technical Director, Virtualization

          Wells Landers

          TVAR Solutions, A Wells Landers Group Company

          VMware Communities User Moderator

          • 32. Re: Physical Server Configurations
            Ken.Cline Champion
            jacquesdp wrote:

            Yes what we plan to do is to have boot and storage volumes on the SAN.


            I am assuming that for VMotion to work we need to have at least the boot volumes there.


            Are you referring to the host boot volume? If so, it doesn't matter where it lives for VMotion. If it's the VM boot volume, then yes - it (and all other VM volumes) must reside on a shared volume for VMotion to work.
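            The rule above can be sketched in code. This is a minimal illustration using a made-up data model (plain lists of datastore names, not the VMware SDK): a VM can VMotion only if every datastore backing its disks is visible to both the source and destination hosts, while the hosts' own boot volumes never enter into it.

            ```python
            # Hypothetical inventory model (not the VMware API): each argument is a
            # list of datastore names. A VM's disks must all live on datastores
            # that BOTH hosts can see; the ESX hosts' boot volumes are irrelevant.
            def vmotion_storage_ok(vm_disk_datastores, src_host_datastores, dst_host_datastores):
                shared = set(src_host_datastores) & set(dst_host_datastores)
                return all(ds in shared for ds in vm_disk_datastores)

            # VM on a SAN LUN both hosts see -> OK
            print(vmotion_storage_ok(["san-lun1"], ["san-lun1", "local0"], ["san-lun1"]))  # True
            # VM on the source host's local disk -> blocked
            print(vmotion_storage_ok(["local0"], ["san-lun1", "local0"], ["san-lun1"]))    # False
            ```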


            I am just not sure what will give the best performance (i.e. should we use RAID 0+1 or RAID 5 for boot?). The storage volume will vary according to application.



            In most cases, the difference in performance isn't going to matter much, particularly for something as lightly used as a boot volume.


            Ken Cline

            Technical Director, Virtualization

            Wells Landers

            TVAR Solutions, A Wells Landers Group Company

            VMware Communities User Moderator

            • 33. Re: Physical Server Configurations
              mreferre Virtuoso





              Thanks for the first part... good example on the second.






              • 34. Re: Physical Server Configurations
                virtek Hot Shot


                I agree with Rodos. I have also seen customers with two c7000 blade chassis, 6 blades in each. A firmware issue affected all switch modules simultaneously, instantly isolating all blades in the same chassis. Because they were the first 6 blades built, the failure took down all 5 primary HA agents. The VMs powered down and never powered back up. Because of this I recommend using two chassis and limiting cluster size to 8 nodes, to ensure that the 5 primary nodes can never all reside on the same chassis.



                My point is that blades are a good solution, but they require special planning and configuration to do right...
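                The 8-node limit above is simple pigeonhole arithmetic: HA elects up to 5 primary agents, so as long as no chassis houses 5 or more of the cluster's hosts, a single-chassis failure cannot take out every primary. A tiny sketch of that check (the chassis layout lists are made-up examples):

                ```python
                # VMware HA elects up to 5 primary agents per cluster. If any one
                # chassis holds 5+ of the cluster's hosts, that chassis could end
                # up containing every primary -- a single point of failure for HA.
                HA_PRIMARIES = 5

                def chassis_is_ha_spof(hosts_per_chassis):
                    """True if some chassis could hold all HA primary nodes."""
                    return any(n >= HA_PRIMARIES for n in hosts_per_chassis)

                print(chassis_is_ha_spof([6, 6]))  # 12-node cluster, 6 per chassis -> True
                print(chassis_is_ha_spof([4, 4]))  # 8-node cluster,  4 per chassis -> False
                ```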



                • 35. Re: Physical Server Configurations
                  Rodos Expert

                  Virtek, great point!


                  You don't need to limit your cluster size; just reconfigure HA and it will spread the primaries and secondaries out again. It's a really good point though - so good that I did a blog entry on it and quoted you; hope you don't mind.
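                  For a sense of the exposure in larger clusters: if the 5 primaries were spread uniformly at random across two equally sized chassis (an illustrative assumption only - real primary election follows host join order, which is exactly why the first-6-blades failure above was so damaging), the chance of all 5 landing in one chassis is easy to compute:

                  ```python
                  from math import comb

                  # Probability that 5 primaries all land in one of two equal chassis,
                  # assuming (for illustration) uniform random placement.
                  def p_all_primaries_one_chassis(hosts_per_chassis, primaries=5):
                      total = 2 * hosts_per_chassis
                      return 2 * comb(hosts_per_chassis, primaries) / comb(total, primaries)

                  print(p_all_primaries_one_chassis(6))  # 12-node cluster, ~1.5%
                  print(p_all_primaries_one_chassis(8))  # 16-node cluster, ~2.6%
                  ```

                  Small but nonzero, which is why periodically redistributing primaries (or capping chassis occupancy) is worth the effort.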





                  Consider the use of the helpful or correct buttons to award points. Blog: http://rodos.haywood.org/

                  • 36. Re: Physical Server Configurations
                    virtek Hot Shot


                    Thanks for the reference in the blog. You are absolutely right - there are several ways to eliminate this SPOF. My actual recommendation to the customer was a choice: either limit cluster size (eliminating the potential for human error) or redistribute the HA primary nodes.


