VMware Cloud Community
ecleppe
Contributor

HP C7000 blades with Virtual Connect Flex-10, high I/O

Hey all,

We started with VMware back in the 2.x days, but times have changed. Over the past few months we have had the impression that the way we used to set things up no longer works as it should.


Our environment consists of 2 HP C7000 enclosures spread across 2 data centers, each enclosure hosting 10 BL490c G6 servers.

In each enclosure there are 2 Virtual Connect 4Gb Fibre Channel modules and 2 Flex-10 Ethernet modules.

We virtualize basically everything except AD and voice. But it seems that even a simple SQL backup now saturates the fibre ports on our SAN switch, probably due to the increased number of servers and the amount of data.

A few of our SQL servers are big and heavily used (single database sizes up to 3.8 TB).

In total we run about 120 VMs on each enclosure, all passing through those 2 Virtual Connect fibre modules.


I guess we are hitting a limitation here.

Should I move those big SQL servers to physical machines, or purchase DL380 servers with ESXi on top and run them one-to-one so they have a dedicated path to the switch?

Or buy 2 additional VC fibre modules to spread the load?

To give you a short overview of what runs:

24 Sharepoint servers

40 SQL Servers

8 Exchange servers

40 Web servers

100 miscellaneous applications: WSUS, SCCM, Navision, RightFax, test & development servers

Any advice is welcome

Erik

3 Replies
dconvery
Champion

That's tough to answer with the limited information you provided. I am assuming you are not saturating the bandwidth of the VC-FC modules. If that is the case, then there may be a bottleneck at the storage array.

What type of storage array are you using? Many times I find that there are not enough spindles to handle the I/O, or that the disks are configured in a way that produces hot spots. It is common to buy storage based on capacity rather than I/O. The other common mistake I see: storage is bought based on I/O, but then the available disk space is used up without regard to I/O.

Most arrays have some sort of monitoring tool that will give you a report of performance statistics.
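To make the "sizing by I/O, not capacity" point concrete, here is a back-of-envelope sketch. All the per-disk IOPS ratings, RAID write penalties, and the workload figures are illustrative rule-of-thumb assumptions, not numbers from this thread or from any particular array:

```python
# Rough rule-of-thumb per-spindle IOPS ratings (illustrative assumptions).
DISK_IOPS = {"15k_fc": 180, "10k_fc": 130, "7.2k_sata": 80}
# Back-end I/Os generated per host write (standard RAID write penalties).
RAID_WRITE_PENALTY = {"raid10": 2, "raid5": 4, "raid6": 6}

def spindles_needed(read_iops, write_iops, disk_type, raid_level):
    """Estimate spindles required: reads hit the disks once,
    each host write costs RAID_WRITE_PENALTY back-end I/Os."""
    backend_iops = read_iops + write_iops * RAID_WRITE_PENALTY[raid_level]
    per_disk = DISK_IOPS[disk_type]
    return -(-backend_iops // per_disk)  # ceiling division

# Example workload: 4000 reads/s + 1000 writes/s on 15k FC disks.
print(spindles_needed(4000, 1000, "15k_fc", "raid10"))  # 34 spindles
print(spindles_needed(4000, 1000, "15k_fc", "raid5"))   # 45 spindles
```

Note how the same workload needs roughly a third more spindles on RAID 5 than RAID 10; a shelf sized purely by terabytes can easily fall short of either number.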

Dave Convery, VCDX-DCV #20 ** http://www.tech-tap.com ** http://twitter.com/dconvery ** "Careful. We don't want to learn from this." -Bill Watterson, "Calvin and Hobbes"
Texiwill
Leadership

Hello,

Before buying new hardware I would bring in a tool to analyze your fibre channel components. Perhaps something like:

VMware vCops 5

NetApp Balance (yes it works with other vendor's hardware)

Solarwinds STM

The list is pretty endless, and some tools are vendor-specific, but the ones I mentioned give pretty good results.

I would look at the following statistics:

* what does vSphere see as the performance to the FC (per host either viewed in vCenter or vCops 5)

* what does the Storage hardware say is the performance (Balance or Solarwinds STM)

* Include the VC-FC modules alongside the storage hardware, which you may be able to do using SNMP or direct queries from those tools.

If vSphere sees one thing and the hardware sees something else, then it could be the HBA or an overloaded host... In this case, check the standard tools, but I personally like how vCops 5 reports the data.
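One cheap way to get the host-side view is esxtop batch mode (`esxtop -b`), which dumps the Physical Disk Adapter counters to CSV. The sketch below scans such an export for FC HBAs pushing close to a 4Gb link's limit. The exact counter labels, host name, and the 80% threshold are assumptions for illustration; check the header row of your own export:

```python
# Scan an esxtop batch-mode CSV for FC HBAs nearing link saturation.
# Counter names follow the "Physical Disk Adapter" group; treat the exact
# labels as assumptions and verify against your own export's header.
import csv, io, re

LINK_LIMIT_MBS = 4 * 1024 / 10  # ~410 MB/s usable on a 4Gb FC link (rough)

def saturated_hbas(csv_text, threshold=0.8):
    """Return adapters whose read+write MB/s ever exceed threshold * link limit."""
    reader = csv.DictReader(io.StringIO(csv_text))
    pat = re.compile(r"Physical Disk Adapter\((vmhba\d+)\)\\MBytes (Read|Written)/sec")
    hot = set()
    for row in reader:
        totals = {}
        for col, val in row.items():
            m = pat.search(col)
            if m and val:
                totals[m.group(1)] = totals.get(m.group(1), 0.0) + float(val)
        hot.update(h for h, mbs in totals.items() if mbs > threshold * LINK_LIMIT_MBS)
    return sorted(hot)

# Synthetic two-HBA sample in esxtop's CSV shape (made-up host and figures).
sample = (
    '"Time","\\\\esx01\\Physical Disk Adapter(vmhba1)\\MBytes Read/sec",'
    '"\\\\esx01\\Physical Disk Adapter(vmhba1)\\MBytes Written/sec",'
    '"\\\\esx01\\Physical Disk Adapter(vmhba2)\\MBytes Read/sec",'
    '"\\\\esx01\\Physical Disk Adapter(vmhba2)\\MBytes Written/sec"\n'
    '"10:00:00","350","60","40","20"\n'
)
print(saturated_hbas(sample))  # ['vmhba1'] -- 410 MB/s exceeds 80% of the link limit
```

If what this reports disagrees with what the array's front-end ports show, that is exactly the vSphere-vs-hardware discrepancy worth chasing.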

I no longer use VC-FC interconnects, so I cannot say how these tools connect to them these days.

What storage you have and how it is laid out is also crucial as dconvery has stated.

Best regards,
Edward L. Haletky
VMware Communities User Moderator, VMware vExpert 2009, 2010, 2011

Author of the books 'VMware ESX and ESXi in the Enterprise: Planning Deployment of Virtualization Servers', Copyright 2011 Pearson Education, and 'VMware vSphere and Virtual Infrastructure Security: Securing the Virtual Environment', Copyright 2009 Pearson Education.
vSphere Upgrade Saga -- Virtualization Security Round Table Podcast

gravesg
Enthusiast

Wait, how did you identify that the problem was FC throughput? ESXTOP? Monitoring the front-end FC ports on your SAN or FC switch? How much connectivity (how many uplinks) do you have to your SAN switches?

The VC-FC modules work with NPIV/NPV, and unless you're port-channeling your FC uplinks, the modules use some basic algorithms to select which host uses which uplink. In some cases, such as during reboots, the balancing may not even be optimal.
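To illustrate why per-login balancing can still leave you lopsided: a scheme that balances host *logins* across uplinks says nothing about bandwidth. The toy below is not the actual VC algorithm, and the host names and MB/s figures are made up, but it shows the failure mode:

```python
# Toy illustration (NOT the real VC-FC algorithm): balancing login counts
# across uplinks can leave per-uplink bandwidth wildly uneven.

def assign_by_login_count(hosts, n_uplinks=2):
    """Each host logs in to the uplink with the fewest logins so far."""
    uplinks = {u: [] for u in range(n_uplinks)}
    for host in hosts:
        target = min(uplinks, key=lambda u: len(uplinks[u]))
        uplinks[target].append(host)
    return uplinks

# (host, typical MB/s) -- suppose the heavy SQL hosts happen to log in first
# after each reboot, alternating with light web hosts.
hosts = [("sql01", 300), ("web01", 20), ("sql02", 280), ("web02", 15),
         ("sql03", 310), ("web03", 25)]
placement = assign_by_login_count([h for h, _ in hosts])
load = {u: sum(dict(hosts)[h] for h in members)
        for u, members in placement.items()}
print(placement)  # {0: ['sql01', 'sql02', 'sql03'], 1: ['web01', 'web02', 'web03']}
print(load)       # {0: 890, 1: 60} -- login counts equal, bandwidth is not
```

Three hosts per uplink looks balanced on paper, yet one uplink carries nearly fifteen times the traffic, which is why checking actual per-uplink throughput (rather than login distribution) matters.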

Visibility in an HP VC environment can be hard, which is why I went with MDS switch interconnects for the c7000. In any case, there are lots of endpoints to check here, so this might be the time to engage your storage vendor for one of those "bi-annual" health assessments.
