VMware Cloud Community
Gaprofittit
Contributor

Cisco UCS vs HP C7000 for datacenter virtual environment

Hi All,

We are planning to roll out either Cisco UCS chassis and blades or HP c7000s.

Obviously both have pros and cons. Can someone who is familiar with both give us
an honest opinion? I'm somewhat biased in one direction and others are biased in the
other direction. I would like a fair assessment of both: where does one fit
where the other doesn't, etc.? Please avoid bashing and negative comments; I'm looking
for solid feedback on things like:

Cost
Manageability/Ease of management
Future scaling
Performance
etc.

Thanks,

Greg

7 Replies
regnak
Hot Shot

Hi,

Well, there aren't many vendors jumping into the blade market, but I took notice when Cisco said they were. They're one of the few companies I would trust in this area, I think. I'd only seen HP blades up to that point, and we worked with a client for over a year that went with UCS. I must say I'm very impressed. Yes, the chassis bent in the middle until they got reinforcements put in, but otherwise the full-width blades took 48 sticks of cheaper memory and were brilliant for what we had in mind. I particularly liked the management GUI, which was rewritten because the usability wasn't great initially. Complex, yes, but Cisco were able to take a fresh look rather than having to support a legacy model dragging behind them. They were ahead when it came to 10 Gig Ethernet, with a better QoS model than the hard-coded version HP had to offer.

I'd still use HP, love the variety of blades, and both companies are very good at what they do. If you already have a Cisco investment at the back end, it's very easy to keep going down the Cisco route, I think. A bit like VMware and VDI and other synergies. I see UCS as here to stay. I believe they ran last year's VMworld labs on them in the USA, not sure about this year. The firmware updates seem easier with UCS, all packaged together etc. Now that you're far enough away from launch, you've missed the inevitable teething issues.

With HP - if you already have HP in-house, it's a nice way to grow. Lots of blade options as mentioned, and they take their blade technology seriously. Just not terribly innovative in my opinion: having to hard-code speeds on their Flex-10 configs isn't the way I want to carve up my bandwidth. Easy to manage, but I've not done a lot of deployments with them. They are very mature and they guarantee the chassis for X years, so you know your investment will last. Check for PCI compliance if this is a requirement.

Mike

meistermn
Expert

For me it is clear: neither Cisco nor HP.

A new approach is needed.

1.) Storage: local versus SAN, and cheap storage

    Storage is the most costly part of a VMware environment.

    Look at Nexenta storage:

http://www.virtual-strategy.com/2011/10/11/nexenta-creates-unprecedented-demand-openstorage-solution...

VMworld Las Vegas: NexentaStor’s superior performance in virtualized and cloud environments recently was demonstrated as it played a critical role in supporting the Hands-on Labs at the VMworld conference in Las Vegas in August. Nexenta was chosen, along with NetApp and EMC, to power this important and unique element of the conference, which further solidified NexentaStor as the ideal OpenStorage solution for virtualized environments. Key accomplishments at the Hands-on Labs included:
• Support for an innovative, cloud-hosted environment that enabled VMworld attendees to try VMware products in real-world scenarios;
• Creation and destruction of more than 148,138 VMs;
• Achieving more than 1.3 GB per second sustained throughput from a single system;
• Four NexentaStor-powered systems worked together to sustain more than one million IOPS served via a single NexentaS

http://www.nexenta.com

2.) Network: server-to-server communication can be as much as 60 percent of traffic.

      Better would be server-to-server communication that works like a hub rather than a switched hierarchy.

      The core, distribution and access topology has too many hops.

       www.xsigo.com

       I haven't looked at www.nextio.com or Force10.

3.) The Google, Facebook, Amazon approach: server and storage in one box.

      www.nutanix.com

      For VDI:

      http://www.pivot3.com

      http://v3sys.com

Casper42
Contributor

I'll be perfectly upfront and honest, I work for HP and the c7000 and Virtual Connect is my bread and butter, so this is going to be a biased opinion but I will do my best not to outright bash my competition.

Something I talk to customers about, and that meistermn brought up below, is server-to-server bandwidth.

In a UCS platform, ALL server-to-server communication has to go through the Fabric Interconnects. So vMotion, FT, and even same-VLAN, different-host VM traffic has to go through the same pipe.

This is because the UCS chassis has the 2204 or 2208 modules, which are essentially FEXs by another name: they have no east-west intelligence and just route traffic north and south to the FI. A c7000 with VC modules can keep that traffic inside a single chassis, or up to 4 chassis when "stacked", so you need less bandwidth going up and out of the environment. Cisco hates this because it means you need fewer Cisco ports upstream.

So keep in mind that your storage traffic, east-west traffic and all your VMs share the same pipe.

That brings up an issue that should be alleviated once they start shipping the 6296, which is FI aggregation/overcommitment.

If you want to have a large number of chassis under the FIs, you will have to deal with aggregation both at the UCS module layer (that FEX thing in the back of the chassis) and again leaving the 6200 for the rest of your environment.  I think with the 6296 they finally have enough ports for this not to be as big a deal anymore, but I am not sure those are shipping just yet.  With 20 chassis hung off a pair of 6296s, that's at least 2:1 oversubscription in each chassis (8 blades through 4 ports), and then 800 Gbps squeezing into somewhere between 144 and 160 Gbps leaving the FIs, which is around 5:1 oversubscription.  So that's 10:1 total, which means you get 1 Gb dedicated to each blade?  Obviously that's an extreme corner case, but I will again remind you that you have a lot more north/south traffic to contend with.
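
If it helps, here is a rough back-of-the-envelope sketch of that oversubscription math. All figures are the ones quoted above (8 blades through 4 x 10Gb IOM ports per chassis, 20 chassis, roughly 144-160 Gbps left northbound on a 6296 pair); the single 10GbE port per blade is my simplifying assumption, not a vendor spec.

```python
# Back-of-the-envelope oversubscription math using the figures from this post.
# Treat every number as an illustrative assumption, not a vendor spec.

BLADES_PER_CHASSIS = 8
BLADE_PORT_GBPS = 10           # assume one 10GbE port per blade for simplicity
IOM_UPLINKS_PER_CHASSIS = 4    # 4 x 10Gb from the chassis IOM/FEX up to the FI
CHASSIS_COUNT = 20
FI_NORTHBOUND_GBPS = 160       # upper end of the 144-160 Gbps range quoted above

chassis_demand_gbps = BLADES_PER_CHASSIS * BLADE_PORT_GBPS      # 80 Gbps offered
chassis_uplink_gbps = IOM_UPLINKS_PER_CHASSIS * 10              # 40 Gbps available
chassis_oversub = chassis_demand_gbps / chassis_uplink_gbps     # 2:1 in the chassis

fi_ingress_gbps = CHASSIS_COUNT * chassis_uplink_gbps           # 800 Gbps into the FIs
fi_oversub = fi_ingress_gbps / FI_NORTHBOUND_GBPS               # ~5:1 leaving the FIs

total_oversub = chassis_oversub * fi_oversub                    # ~10:1 end to end
per_blade_gbps = BLADE_PORT_GBPS / total_oversub                # ~1 Gbps per blade

print(f"chassis oversubscription : {chassis_oversub:.1f}:1")
print(f"FI northbound oversub    : {fi_oversub:.1f}:1")
print(f"worst-case per blade     : {per_blade_gbps:.1f} Gbps")
```

Swap in your own chassis count and northbound port budget to see how far that worst-case per-blade number moves for a more realistic design.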

Lastly, the all-your-eggs-in-one-basket approach worries me a little.

They have storage, IP and management traffic all flowing through the same cable.  So when things take a dump, they are going to dump hard, and you will have little or no visibility into what is happening right now.  Cisco will tell you they are doing you a favor by aggregating cables, but I like the fact that our OA/VC/iLO are all completely out of band and always available.

On the pro-HP side, I will just say three things.

We have been, and I think always will be, an open platform.  As already mentioned, the c7000 gives you a lot of choices as far as interconnects: several different flavors of Ethernet (including Catalyst and Nexus FEX; ask Cisco when we will see a ProCurve switch in their design 🙂), several different models of FC switch/VC, and then oddballs used in corner cases like InfiniBand, SAS switches, and I'm sure more to come.

Because of this openness, we have had to take a very modular approach in the past, which makes things like firmware harder to manage than a completely closed system like UCS (think Android vs. iPhone).

The SPP (Service Pack for ProLiant) is making strides to reduce the headaches there, and the latest version (basically anything starting in 2012) has a new engine known as SUM 5 that is helping even more.  This is a single tool that can manage firmware, drivers and utilities across most OSes, and it includes the ability to upgrade the c7000/c3000 enclosures as well as Virtual Connect modules.  All from one tool.

Lastly, make sure you look at things from the datacenter point of view.  We have something like 7 patents just on the fans in the c7000.  They operate on a high-pressure, low-volume approach, so we're not creating a lot of pressure in your hot aisle and causing it to bleed into the other areas of your DC and jack up the thermals.  I was walked through a customer DC not even a month ago where they put in 4 racks of UCS.  A month after turning it on, they had to go back to APC and get special hot-aisle containment/chimney add-ons for their racks because the UCS gear had created a rather large hot spot in their datacenter and triggered a number of alarms.  Servers 2 rows away were suddenly seeing a large increase in ambient air temps.  Stand behind a rack of UCS servers and a rack of c7000s and you will find the c7000 puts out warmer air at lower volume than the UCS, which puts out cooler air at higher volume.  This might seem like I am rooting for the other team, but go ask a proper datacenter engineer what happens when the return air he's getting back is only 5 degrees F hotter than what he's pumping into the cold aisle.  Your CRACs will just love that.

I know I am late to respond, but if you have already made a decision, I would love to know which way you went and what the deciding factors were.

-Dan

gravesg
Enthusiast

I've worked with c7000 enclosures for nearly 5 years with virtual connect and just recently started looking at the UCS architecture with some basic kit on site.

I'm somewhat biased in the sense that I prefer the design methodology of the UCS converged environment, with its single point of management (FI) architecture and QoS/intelligence upstream, versus the back of the chassis, where HP interconnects get extremely costly and difficult to get visibility into. Conversely, it's only in the last 1.5 years or so that the Virtual Connect firmware has become solid in my experience. It was the source of many frustrations in a top law firm's production environment previously.

So if I were building a greenfield datacenter, I would go UCS. There is vision there that I believe HP is now trying to catch up with in their Gen8 line.

I also believe a scaled UCS system will save big (hardware) bucks, but if you are not that big a shop, the financial delta may not be worth it, and the "newness" of FCoE and Cisco in the server game may not be warm and fuzzy enough for management to swallow. Overall, both are great products, so it may come down to who is willing to give you those free chassis and training to "seed" your datacenter 😉

PaulRiker
Contributor

This is an interesting product debate. I'm a systems engineer with 20+ years of experience; I've pretty much done it all: networks, firewalls, load balancers, storage (EMC and NetApp), servers and blades, and I went through the whole dot-com boom/bust.  Lately I've been more of a "server and storage" guy for the last 10 years or so.  Anyway, in the past two years I have been involved with the Cisco UCS product at two different large companies.  Both of them chose UCS for brand-new "greenfield" datacenters.  In both cases, the final decision makers were "network guys"; hence, we went with UCS. In both cases, the server and storage teams had a preference for HP.  I like to learn something new as much as the next guy, and UCS did/does offer some unique designs.  However, for both companies, the "secret sauce" is really VMware.  VMware runs on just about anything, so the brand of the actual server hardware, assuming the systems use the same CPU brand (Intel or AMD), doesn't really matter in the end.  I will put this out there, as these are my own personal observations with both UCS and HP blade systems.

UCS - with the 6120/6140 you can control many chassis and control each blade, give each blade its own MACs and WWNs, and move them around from blade to blade. So if a blade were to go bad, you could just move its profile to another blade, and it would take over and become the blade it is replacing. Yeah, this is nice; it would have been even nicer many years ago when everything was physical, but now, with each machine as a VM, this "feature" is a bit "ho hum".  UCS will tell you this is the greatest feature, centralized management.  Guess what, it is also its biggest flaw!  Think about this: what would happen if something happened to the "brains" of the UCS system, say a power hit causes the system to go offline, or a datacenter gets too hot and causes the systems to go down?  It happened to me twice. Probably a 1% chance event, but it still happened.  In each case, UCS didn't come back properly, and it caused the system information for each blade to fail. Imagine 20 blades as ESX hosts with 10 VMs, and then the ESX hosts can't boot from their boot LUNs or access their storage anymore because their blade configurations are messed up.  Not a pretty picture, huh? 😞  Now, these are rare events, and maybe I was snake-bitten, but the fact that there could be a "ticking time bomb" from a bad firmware package in your UCS system, and you not know about it, until a rare event takes out your system and then the blades don't come back?  Not a pretty picture.  In the first case, we had a mixed IT shop and had not fully made the switch to UCS.  Luckily, the HP blades retained their configuration and storage connections, and we could migrate VMs from the UCS systems to the HP systems; not being fully in production helped too.
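
For anyone who hasn't touched UCS, here is a minimal sketch of that "identity lives in the profile" idea, written as hypothetical Python rather than the actual UCS Manager object model or API. The point is just that the MACs, WWNs and boot LUN belong to the profile, so a healthy system can re-point them at a spare blade, and a corrupted profile database (as in the outages above) leaves the blades without their identities.

```python
# Hypothetical sketch of a UCS-style service profile; names, fields and values
# are made up for illustration and are not the UCS Manager API.

from dataclasses import dataclass

@dataclass
class ServiceProfile:
    name: str
    macs: list            # virtual MAC addresses presented to the OS
    wwpns: list           # virtual FC WWPNs used for boot-from-SAN zoning
    boot_lun: str         # the boot target follows the profile, not the blade
    assigned_blade: str = "unassigned"

def move_profile(profile: ServiceProfile, spare_blade: str) -> None:
    """Re-associate the profile with a spare blade after a hardware failure."""
    print(f"dissociating {profile.name} from {profile.assigned_blade}")
    profile.assigned_blade = spare_blade
    print(f"{spare_blade} now boots {profile.boot_lun} with MACs {profile.macs}")

esx01 = ServiceProfile(
    name="esx01",
    macs=["00:25:b5:00:00:01"],
    wwpns=["20:00:00:25:b5:00:00:01"],
    boot_lun="boot-lun-esx01",
    assigned_blade="chassis1/blade3",
)
move_profile(esx01, spare_blade="chassis2/blade7")
```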

So, for anyone who wants to move fully to UCS, I would HIGHLY caution them to at least maintain some standalone server systems to fall back on, or buy two UCS systems and keep them separate, or run a mixed setup with some other vendor.  That is, if you like to sleep easy at night.

There are other benefits I see with HP over UCS that I rarely see mentioned.  Let's face it, HP is a server company.  Their blades give you many more options than UCS can deliver.  As a technical engineer, one of my roles is to deliver "technical solutions" that best meet the company's business direction, and you can do that much better with HP.  I last worked for an engineering company.  Do you think having "workstation" blades to deliver superior 3D CAD is an advantage?  Yes.  What about HPC, where having more cores is better?  Or have you seen the latest VMmark benchmarks, where the latest AMD systems kick some real butt?  Advantage HP - UCS is Intel only.

What about the co-lo?   In our case, we can only have "x" amount of power dedicated per rack.  With this amount of power, we could either power 3 UCS blade chassis or 2 HP blade chassis.  Which one is better?  UCS: 8 blades per chassis x 3 = 24 blades.  HP: 16 blades per chassis x 2 = 32 blades.  So I'll take the 33% advantage in server blade density over UCS any day when you have to pay for rack space.  To me this is a big advantage for HP.
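
As a quick sanity check of that density math (the per-rack power budget allowing 3 UCS chassis or 2 c7000 enclosures comes from my co-lo case above, so treat it as one data point, not a general rule):

```python
# Blade density per rack under the power budget described above.
ucs_blades_per_rack = 3 * 8      # 3 UCS chassis x 8 half-width blades = 24
c7000_blades_per_rack = 2 * 16   # 2 c7000 enclosures x 16 half-height blades = 32

advantage_pct = (c7000_blades_per_rack - ucs_blades_per_rack) / ucs_blades_per_rack * 100

print(f"UCS   : {ucs_blades_per_rack} blades per rack")
print(f"c7000 : {c7000_blades_per_rack} blades per rack "
      f"(~{advantage_pct:.0f}% more in the same power envelope)")
```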

Management - the UCS interface: I hate it.  Try to do something simple like setting up notifications; it really isn't that clean an interface to do anything with.  GUIs, though, are "in the eye of the beholder", so while one person may not like the interface, I'm sure others will; I just don't like it.  Everything is a "service profile", and the way you have to "trick out" the interface to name stuff..  well, just not a fan.

So, at the end of the day, both systems run VMware just fine.  Personally, though, I would rather run my VMware systems on HP hardware.

meistermn
Expert

Take the no-SAN approach. It is easier.

Take a look at the whole stack. The old stack: x86 servers, Ethernet switches, SAN switches, SAN patch panel, network patch panel.

With a converged solution you only need two devices.

Or read the Gartner strategy for 2013:

Gartner Identifies the Top 10 Strategic Technology Trends for 2013

http://www.gartner.com/it/page.jsp?id=2209615

Integrated Ecosystems
The market is undergoing a shift to more integrated systems and ecosystems and away from loosely coupled heterogeneous approaches. Driving this trend is the user desire for lower cost, simplicity, and more assured security

Also very helpful on the old pod/block versus appliances discussion:

http://blogs.gartner.com/gunnar-berger/post-vmworld-thoughts-appliances-vs-the-rapid-desktop-program...

Advantages of appliances:

utilizing local storage, which tends to be faster (running on the same bus), cheaper (you don't need expensive SAN HDs) and scales better (every time I add an appliance I'm getting more IOPS). The appliance approach in one sense is still pod/block, as it is still storage/compute/network, but it all happens within the singular appliance. The big catch, though, is that when you add an additional appliance you aren't adding another separate bucket; you are increasing the size of your original bucket. This means every time you add an appliance EVERY user is affected positively.
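
A toy model of that last point, with a made-up IOPS-per-node figure purely to show the shape of the scaling:

```python
# With appliances, every node's local storage joins one shared pool, so each
# added node raises the IOPS ceiling any existing workload can burst into.
# In a pod/block design a workload stays capped at its own pod's IOPS.
# The 50,000 IOPS-per-node figure is invented for illustration.

IOPS_PER_NODE = 50_000

for nodes in (1, 2, 4, 8):
    pooled_ceiling = nodes * IOPS_PER_NODE   # whole pool is available to every workload
    pod_ceiling = IOPS_PER_NODE              # a workload never sees more than its pod
    print(f"{nodes} nodes: shared pool ceiling {pooled_ceiling:,} IOPS, "
          f"per-pod ceiling stays at {pod_ceiling:,} IOPS")
```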

And in one of the next releases of vSphere you can see local storage used as shared storage.

Overview of the vSAN technology, leveraging local storage inside vSphere nodes to present it out as shared storage. Note the fact that the "distributed storage" is part of the actual hypervisor, much like the already existing distributed networking.

http://www.vmdamentals.com/?p=4204

meistermn
Expert

Aren't Cisco and HP outdated?

Version 3.0 of Nutanix is coming out now.

http://www.nutanix.com/launch/

Want to see the DR solution in a video?
