VMware Cloud Community
bob1971
Contributor

Software iSCSI or Broadcom HBA w/TOE?

Loading ESXi 4.1 on three Dell R710s. ESXi is picking up the 1 Gb Broadcom 5709s as hardware HBAs; however, nothing I've tried has allowed me to enable jumbo frames. When I set up the HBAs with jumbo frames and do a rescan, it will not pick up any LUNs from the MD3000i. (The MD3000i does detect the HBAs and I've added them to the host in the iSCSI manager... still no joy on the ESXi end. I have also configured the ports on the MD3000i to accept jumbo frames.) If I leave the MTU set to 1500, everything works fine under hardware iSCSI.

When I use the software initiator with the same NICs, I can enable jumbo frames. (The ESXi documentation does confirm that the Broadcom doesn't support IPv6 or jumbo frames, but you can't blame a guy for trying.)

My question: which would give better performance? Should I use the hardware HBAs without jumbo frames to reduce the iSCSI overhead, or set it up for software iSCSI with jumbo frames? As I understand it, the software iSCSI initiator has been greatly improved, so I'm not really sure which way to go.

(I posted this question at the end of another thread and didn't get any bites so I figured I'd repost as a new question. I apologize for that.)

1 Solution

Accepted Solutions
Andy_Banta
Hot Shot

As you discovered, you really can't change the MTU on the Broadcom HBAs.

Performance is always hard to answer. The biggest advantage of the Broadcom adapters is improved CPU efficiency. Beyond that, realize that both the ESX iSCSI initiator and the Broadcom initiator can saturate a 1 Gb link for most useful block sizes, even without jumbo frames.

So, if you're looking for CPU efficiency, for most block sizes the Broadcom adapter is going to use less host CPU for the same throughput than jumbo frames with the ESX initiator.

If you're looking for pure throughput, both can fill the wire at larger block sizes, and that doesn't change much with the introduction of jumbo frames.

Andy

10 Replies
golddiggie
Champion

You need to add the vSwitches and port groups via the command line/service console (either unsupported mode or the vMA, on ESXi) in order to get jumbo frames to work. The regular Add Networking wizard won't let you enable jumbo frames, no matter what the hardware allows. This is also true with ESX; it's just easier there, since you don't need to enable the unsupported SC or pull in the vMA appliance.
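For what it's worth, here's a dry-run sketch of that jumbo-frame setup from the vMA/vCLI side. The vSwitch name, port group name, and IP are placeholders, and `run` only echoes the commands, so nothing is changed until you swap in the real `vicfg-*` invocations against your own host:

```shell
#!/bin/sh
# Dry-run sketch of a jumbo-frame vSwitch + VMkernel port via vCLI.
# Names, IP, and netmask are examples; 'run' only echoes, so swap it
# for the real vicfg-* invocation against your host.
VSWITCH="vSwitch1"
PG="iSCSI-PG"
MTU=9000

CMDS=""
run() { CMDS="${CMDS}$*
"; echo "$@"; }

run vicfg-vswitch -a "$VSWITCH"              # create the vSwitch
run vicfg-vswitch -m "$MTU" "$VSWITCH"       # set the jumbo-frame MTU on it
run vicfg-vswitch -A "$PG" "$VSWITCH"        # add the iSCSI port group
run vicfg-vmknic -a -i 10.1.1.9 -n 255.255.255.0 -m "$MTU" "$PG"  # jumbo vmknic
```

The -m flag is the part the Add Networking wizard won't expose; everything else matches the normal setup.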

Network Administrator

VMware VCP4

Consider awarding points for "helpful" and/or "correct" answers.

bob1971
Contributor

Andy, that's what I needed. If throughput is equal, I'll save the CPU cycles for VMs instead of iSCSI overhead.

Gold, thanks for the response. I was using the RCLI to set up the original vSwitch and vmknic with an MTU of 9000. The 5709 wouldn't work with jumbo frames enabled, so I had to delete the vSwitch and set it back up with the default MTU; then everything worked. I was just wondering if jumbo frames made a big enough difference that I'd rather use software iSCSI with jumbo frames enabled vs. hardware and the default MTU.

Tomek24VMWARE
Contributor

How did you configure your Broadcom HBA?

I'm trying to do this on my 5709 NIC and it's not working for me.

bob1971
Contributor

I had checked out your other post and thought that we were having similar problems. Everything sets up fine; the SAN sees the IQN coming from ESX, but the LUNs never pop up.

From what I can tell, based on your first post, it looks like you've set up the vSwitch and port group with jumbo frames (MTU 9000). After I figured out that jumbo frames weren't going to work for the Broadcom 5709, I tried using the -m switch to set the MTU back to 1500. It still didn't work. Ultimately, I had to delete and recreate the vSwitch/port group.

When you recreate it, if you're using the CLI, leave off anything that sets the MTU to 9000.

Use the esxcli swiscsi command to attach the vmknic to the HBA. Add the iSCSI hardware ports to the Send Targets box. Do a rescan. Make sure the IQN is listed in your iSCSI management software for your ESX host. Rescan again, and your LUNs should pop up.

That's basically how I got mine working.

Tomek24VMWARE
Contributor

At which point did you add the vmnic to the vSwitch? And when did you add the vmk to swiscsi using the CLI command? Maybe the sequence of the commands is important?

bob1971
Contributor

Here's the order I did the CLI commands in:

Create the switch:

vicfg-vswitch -a vSwitch1

Add the uplink NIC:

vicfg-vswitch vSwitch1 -L vmnic5 (5 in my case; use the appropriate NIC)

Create the port group:

vicfg-vswitch vSwitch1 -A <port group name>

Create the VMkernel interface for iSCSI traffic:

vicfg-vmknic -a -i <IP address for HBA> -n <net mask> <port group name>

Attach the vmk to the HBA:

esxcli swiscsi nic add -n vmkX -d vmhbaXX

After you attach the vmk to the HBA, you can then go into your HBA and add your iSCSI hardware ports to Send Targets (dynamic discovery). Rescan, verify that your HBA's IQNs appear in your iSCSI management software under the appropriate ESX host, rescan again, and that should do it.

(Commands were ripped off from a Dell Tech Center article...very helpful.)
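In case it helps to see the whole thing in one place, the sequence above can be sketched as one dry-run script. Every name and number below is a placeholder from my own setup, and `run` only echoes, so nothing is actually changed until you swap in the real commands:

```shell
#!/bin/sh
# Dry-run of the default-MTU sequence above. The vSwitch, port group,
# NIC, vmk, vmhba, IP, and netmask are all placeholders; adjust them
# for your host. 'run' echoes instead of executing.
VSWITCH="vSwitch1"; PG="iSCSI-PG"
NIC="vmnic5";       VMK="vmk1";  HBA="vmhba34"
IP="10.1.1.10";     MASK="255.255.255.0"

CMDS=""
run() { CMDS="${CMDS}$*
"; echo "$@"; }

run vicfg-vswitch -a "$VSWITCH"                 # 1. create the switch (default MTU)
run vicfg-vswitch "$VSWITCH" -L "$NIC"          # 2. add the uplink NIC
run vicfg-vswitch "$VSWITCH" -A "$PG"           # 3. create the port group
run vicfg-vmknic -a -i "$IP" -n "$MASK" "$PG"   # 4. VMkernel interface (no -m 9000)
run esxcli swiscsi nic add -n "$VMK" -d "$HBA"  # 5. bind the vmk to the Broadcom HBA
```

After step 5 comes the part that isn't scriptable here: add the array's iSCSI ports to Send Targets, rescan, verify the IQN on the array, and rescan again.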

Tomek24VMWARE
Contributor

Thank you.

But this sequence does not work for me. :(

I have these NICs (lspci):

000:001:00.0 Network controller: Broadcom Corporation PowerEdge R710 BCM5709 Gigabit Ethernet
000:001:00.1 Network controller: Broadcom Corporation PowerEdge R710 BCM5709 Gigabit Ethernet
000:002:00.0 Network controller: Broadcom Corporation PowerEdge R710 BCM5709 Gigabit Ethernet
000:002:00.1 Network controller: Broadcom Corporation PowerEdge R710 BCM5709 Gigabit Ethernet

esxcli swiscsi nic list -d vmhba35
vmk4
    pNic name: vmnic1
    ipv4 address: 10.1.1.9
    ipv4 net mask: 255.255.255.0
    ipv6 addresses:
    mac address: 00:26:b9:8a:ec:bf
    mtu: 1500
    toe: false
    tso: true
    tcp checksum: false
    vlan: true
    vlanId: 0
    ports reserved: 63488~65536
    link connected: true
    ethernet speed: 1000
    packets received: 20860
    packets sent: 182
    NIC driver: bnx2
    driver version: 2.0.7d-2vmw
    firmware version: 5.0.11 NCSI 2.0.5

:( I do not know what to do.

Andy_Banta
Hot Shot

You've correctly attached this vmknic to vmhba35.

Can you provide the output of esxcfg-scsidevs -a so we can see which iSCSI HBAs are which?

Is vmhba35 the ESX iSCSI initiator or a Broadcom adapter?

esxcli swiscsi vmknic list -d vmhba<#>

shows you the available vmknics for the adapter.

esxcli swiscsi vmnic list -d vmhba<#>

shows you the PNICs that can be used with the adapter.

Andy
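To save a step once that output comes back, here's a quick sketch that classifies adapters from esxcfg-scsidevs -a text. The canned SAMPLE lines below mirror the shape of real output (iscsi_vmk is the software initiator's driver, bnx2i is the Broadcom offload driver); in practice you'd pipe the actual command output in instead:

```shell
#!/bin/sh
# Sketch: pick out the software iSCSI initiator and the online Broadcom
# HBAs from `esxcfg-scsidevs -a`-style output. SAMPLE is canned data in
# the same shape as the real output; substitute the real command.
SAMPLE='vmhba39 iscsi_vmk online iscsi.vmhba39 iSCSI Software Adapter
vmhba34 bnx2i online iscsi.vmhba34 Broadcom iSCSI Adapter
vmhba37 bnx2i unbound iscsi.vmhba37 Broadcom iSCSI Adapter'

# Field 2 is the driver, field 3 is the link/bind state.
SOFT=$(printf '%s\n' "$SAMPLE" | awk '$2 == "iscsi_vmk" {print $1}')
BCOM=$(printf '%s\n' "$SAMPLE" | awk '$2 == "bnx2i" && $3 == "online" {print $1}')
echo "software initiator:   $SOFT"
echo "online Broadcom HBAs: $BCOM"
```

Note that an unbound bnx2i adapter (like vmhba37 in the sample) is filtered out; only online ones are candidates for port binding.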

Tomek24VMWARE
Contributor

~ # esxcfg-nics -l
Name   PCI           Driver Link Speed    Duplex MAC Address       MTU  Description
vmnic0 0000:01:00.00 bnx2   Up   1000Mbps Full   00:26:b9:8a:ec:bd 1500 Broadcom Corporation PowerEdge R710 BCM5709 Gigabit Ethernet
vmnic1 0000:01:00.01 bnx2   Up   1000Mbps Full   00:26:b9:8a:ec:bf 1500 Broadcom Corporation PowerEdge R710 BCM5709 Gigabit Ethernet
vmnic2 0000:02:00.00 bnx2   Up   1000Mbps Full   00:26:b9:8a:ec:c1 1500 Broadcom Corporation PowerEdge R710 BCM5709 Gigabit Ethernet
vmnic3 0000:02:00.01 bnx2   Up   1000Mbps Full   00:26:b9:8a:ec:c3 1500 Broadcom Corporation PowerEdge R710 BCM5709 Gigabit Ethernet
vmnic4 0000:07:00.00 igb    Up   1000Mbps Full   00:1b:21:63:c9:90 1500 Intel Corporation 82576 Gigabit Network Connection
vmnic5 0000:07:00.01 igb    Up   1000Mbps Full   00:1b:21:63:c9:91 1500 Intel Corporation 82576 Gigabit Network Connection
vmnic6 0000:08:00.00 igb    Up   1000Mbps Full   00:1b:21:63:c9:94 1500 Intel Corporation 82576 Gigabit Network Connection
vmnic7 0000:08:00.01 igb    Up   1000Mbps Full   00:1b:21:63:c9:95 1500 Intel Corporation 82576 Gigabit Network Connection

~ # esxcli swiscsi nic list -d vmhba35
vmk4
    pNic name: vmnic1
    ipv4 address: 10.1.1.9
    ipv4 net mask: 255.255.255.0
    ipv6 addresses:
    mac address: 00:26:b9:8a:ec:bf
    mtu: 1500
    toe: false
    tso: true
    tcp checksum: false
    vlan: true
    vlanId: 0
    ports reserved: 63488~65536
    link connected: true
    ethernet speed: 1000
    packets received: 48223
    packets sent: 182
    NIC driver: bnx2
    driver version: 2.0.7d-2vmw
    firmware version: 5.0.11 NCSI 2.0.5

~ # esxcli swiscsi nic list -d vmhba36
vmk3
    pNic name: vmnic2
    ipv4 address: 10.1.1.11
    ipv4 net mask: 255.255.255.0
    ipv6 addresses:
    mac address: 00:26:b9:8a:ec:c1
    mtu: 1500
    toe: false
    tso: true
    tcp checksum: false
    vlan: true
    vlanId: 0
    ports reserved: 63488~65536
    link connected: true
    ethernet speed: 1000
    packets received: 60655
    packets sent: 97
    NIC driver: bnx2
    driver version: 2.0.7d-2vmw
    firmware version: 5.0.11 NCSI 2.0.5

~ # esxcli swiscsi nic list -d vmhba34
vmk1
    pNic name: vmnic0
    ipv4 address: 10.1.1.10
    ipv4 net mask: 255.255.255.0
    ipv6 addresses:
    mac address: 00:26:b9:8a:ec:bd
    mtu: 1500
    toe: false
    tso: true
    tcp checksum: false
    vlan: true
    vlanId: 0
    ports reserved: 63488~65536
    link connected: true
    ethernet speed: 1000
    packets received: 282816
    packets sent: 556
    NIC driver: bnx2
    driver version: 2.0.7d-2vmw
    firmware version: 5.0.11 NCSI 2.0.5

~ # esxcfg-scsidevs -a
vmhba38 ata_piix     link-n/a sata.vmhba38   (0:0:31.2) Intel Corporation PowerEdge R710 SATA IDE Controller
vmhba39 iscsi_vmk    online   iscsi.vmhba39  iSCSI Software Adapter
vmhba0  ata_piix     link-n/a sata.vmhba0    (0:0:31.2) Intel Corporation PowerEdge R710 SATA IDE Controller
vmhba1  megaraid_sas link-n/a unknown.vmhba1 (0:3:0.0) LSI Logic / Symbios Logic Dell PERC H700 Integrated
vmhba33 usb-storage  link-n/a usb.vmhba33    () USB
vmhba34 bnx2i        online   iscsi.vmhba34  Broadcom iSCSI Adapter
vmhba35 bnx2i        online   iscsi.vmhba35  Broadcom iSCSI Adapter
vmhba36 bnx2i        online   iscsi.vmhba36  Broadcom iSCSI Adapter
vmhba37 bnx2i        unbound  iscsi.vmhba37  Broadcom iSCSI Adapter

~ # esxcli swiscsi vmknic list -d vmhba36
vmk3
    vmknic name: vmk3
    mac address: 00:50:56:77:01:d6
    mac address settable: NO

~ # esxcli swiscsi vmknic list -d vmhba34
vmk1
    vmknic name: vmk1
    mac address: 00:50:56:7e:c7:b2
    mac address settable: NO

~ # esxcli swiscsi vmknic list -d vmhba35
vmk4
    vmknic name: vmk4
    mac address: 00:50:56:7d:90:a2
    mac address settable: NO

vmhba34, 35, 36, and 37 are Broadcom iSCSI adapters.

When I connect vmk 0, 1, 2, or 3 to vmhba39 (the software iSCSI adapter), everything is OK and iSCSI sees the LUN.

On the Broadcom adapters it does not see the LUN; it tries to connect to the array and then closes the session.

:( Did you enable something in the BIOS for these Broadcom NICs?

Do you have NICs integrated on the motherboard, or a PCI Express NIC card?
