VMware {code} Community
skleman
Contributor

10GbE bottlenecked at 1GbE regardless

I have 10GbE throughout my network, but I'm bottlenecked at 1GbE.

Here's my scenario:

ESXi 6 host, 10Gb fiber to a ProCurve 6600 switch

vCenter Server 6.5 on a VM

Storage on iSCSI via OpenMediaVault, with 10GbE fiber to the ProCurve 6600 switch (LSI 9750, RAID 6, 11x 4TB enterprise 7200 RPM disks)

Backup appliance (may or may not be pertinent here, but it's where I'm ultimately trying to get the 10Gb throughput, so I'll mention it)

iperf 2 shows 7Gb/s+ from an Ubuntu VM on the host through to the backup appliance

iperf shows 7Gb/s+ from VM to VM

iperf shows essentially all links are 10Gb capable (example commands below)
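In case anyone wants to reproduce the test, this is roughly the kind of iperf 2 run I used; the IP address, window size, and stream count below are placeholders, so adjust them for your setup:

    # on the backup appliance (server side)
    iperf -s -w 512k

    # on the Ubuntu VM (client side): 4 parallel streams for 30 seconds
    iperf -c 10.0.0.50 -w 512k -P 4 -t 30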

Jumbo frames are enabled throughout: VM, switch, vSwitch, VMkernel, storage, appliance, etc.
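For what it's worth, I validated jumbo frames end to end with do-not-fragment pings sized just under the 9000-byte MTU (8972 bytes of payload, leaving room for the 28 bytes of IP/ICMP headers); the target IP is a placeholder:

    # from the ESXi shell: -d sets don't-fragment, -s is the payload size,
    # -I picks a specific VMkernel interface if you have several
    vmkping -d -s 8972 -I vmk1 10.0.0.50

    # equivalent check from a Linux VM or the storage box
    ping -M do -s 8972 10.0.0.50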

However, when I actually move data (file transfer, vMotion, SCP, backup via NBD transport), EVERYTHING maxes out at 1Gb/s. There are NO 1Gb links in the mix, and iSCSI is bound to the 10Gb link.
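For completeness, these are the sanity checks I've already run from the ESXi shell to confirm the negotiated link speeds and the VMkernel bindings (interface names will differ per host):

    # confirm the physical uplinks really negotiated 10000 Mb/s full duplex
    esxcli network nic list

    # confirm which VMkernel interfaces (mgmt, vMotion, iSCSI) sit on which portgroup
    esxcli network ip interface list

    # live per-vmnic throughput while a transfer is running: launch esxtop, press 'n'
    esxtop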

Is there a license for 10Gb within VMware? Is there a setting in the vSphere Web Client I haven't found? (I'm new to the Web Client and can't stand the networking section.) Something has to be in play here that I haven't been privy to.

Thank you

4 Replies
DeaconZ
Enthusiast

I am running into the same issue. vSphere 6.0 Ent Plus on both HP and Cisco UCS, all using 10GbE on the management network. We recently switched from Veeam (which does Hot Add backups at SAN speed, 400MB/s or so) to a vendor called Rubrik. Rubrik does NBD backups only, and we are getting capped at around 800Mb/s (100MB/s) on 10Gb links. Storage is an all-flash array with <0.5ms latency. On a host with a 1Gb/s management link it's worse: only 1MB/s. It's almost like something is throttling it based on the size of the pipe.
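A quick way to confirm the backup traffic really rides the management VMkernel interface: NBD/NFC sessions hit the host on TCP 902, so you can watch for them during a job (the grep pattern is just illustrative):

    # on the ESXi host while a backup is running
    esxcli network ip connection list | grep 902

    # watch live vmknic throughput at the same time (press 'n' in esxtop)
    esxtop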

Anton_Lebedev_N
Contributor

Have you ever figured out what is bottlenecking the NBD backup connection? I have a similar problem in my environment, where I use NetBackup with NBD, and it looks like I never get above 100MB/s throughput. I've read that the VMkernel port caps bandwidth at roughly that level for NFC connections and that there is no way to get around it.

For reference, I'm on ESXi 6, using 25Gb Mellanox uplinks, everything is set to jumbo frames, and my backup host is RHEL 7.4 with vmxnet3.
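One thing I still want to try, to rule out the network path itself: recent ESXi 6.x builds bundle an iperf3 binary with the vSAN tools. The usual trick is to copy it first (the original may refuse to run in place) and temporarily open the firewall for the test window. The path and IPs below are from my notes, so treat them as assumptions and verify on your build:

    # on the ESXi host
    cp /usr/lib/vmware/vsan/bin/iperf3 /usr/lib/vmware/vsan/bin/iperf3.copy
    esxcli network firewall set --enabled false   # test window only
    /usr/lib/vmware/vsan/bin/iperf3.copy -s -B <vmk-ip>

    # on the RHEL 7.4 backup host
    iperf3 -c <vmk-ip> -P 4 -t 30

    # re-enable the firewall afterwards
    esxcli network firewall set --enabled true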

daphnissov
Immortal

If you're using NBD, and especially on vSphere 6.0, you're always going to take a severe throughput penalty because SSL is forced on the connection. This dramatically reduces throughput, since all your backup data has to traverse the ESXi kernel network stack, which is horribly inefficient in that version. Using NetBackup (one of the worst backup applications ever, in my view) only compounds the problem. So there's really not much you can do while you're on vSphere 6.0.
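If you want a rough sense of whether the crypto alone could be the ceiling, benchmark raw cipher throughput on the backup host. This is only a crude upper-bound check for the proxy side, since the expensive part is inside the ESXi host's stack:

    # single-core AES throughput on the backup host
    # (AES-NI typically makes this multiple GB/s on modern CPUs)
    openssl speed -evp aes-128-cbc
    openssl speed -evp aes-256-cbc

If the proxy pushes multi-GB/s here but NBD still caps near 100MB/s, the ceiling is the host-side NFC/SSL path, not your proxy's crypto.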

al3b3d3v
Contributor

daphnissov

I'm not going to start speculating about software A vs. software B, but could anyone share more information on NBD in ESXi 6.0, how it forces SSL, and some documentation regarding the kernel stack and its network performance?
