VMware Cloud Community
StephanS80
Contributor

vSphere/ESX 4.0 build-164009 and Infiniband

Hello,

I'm evaluating vSphere for our firm and I have a problem with one ESX machine and an InfiniBand card. The goal is to connect storage from JBODs via InfiniBand to the ESX hosts. I installed several ESX nodes for testing, then I shut down one of these machines and inserted the IB card. The card is visible via lspci, but no IB modules are loaded on the ESX host. Any hints on installing/activating this card? Do I have to reinstall the ESX host?

Best regards,

Stephan

Infiniband Card Information:

0c:00.0 InfiniBand: Mellanox Technologies MT25208 InfiniHost III Ex (Tavor compatibility mode) (rev a0)

Subsystem: Mellanox Technologies MT25208 InfiniHost III Ex (Tavor compatibility mode)

Flags: bus master, fast devsel, latency 0, IRQ 11

Memory at dac00000 (64-bit, non-prefetchable)

Memory at d8000000 (64-bit, prefetchable)

Memory at d0000000 (64-bit, prefetchable)

Capabilities: Power Management version 2

Capabilities: Vital Product Data

Capabilities: Message Signalled Interrupts: 64bit+ Queue=0/5 Enable-

Capabilities: MSI-X: Enable+ Mask- TabSize=32

Capabilities: Express Endpoint IRQ 0
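
For reference, a rough sketch of the checks I ran from the ESX service console (the grep patterns and module names are guesses on my part; adjust as needed):

lspci | grep -i mellanox        # the HCA shows up on the PCI bus
vmkload_mod -l | grep -i ib     # no ib_*/mlx* entries means no IB driver is loaded
esxupdate query                 # check whether any Mellanox driver bundle is installed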

17 Replies
wolf
Enthusiast

Hi,

in 3.5 you had to install a driver on the ESX host; the driver was provided by Mellanox.

I do not think it will work in 4.0 (it's 64-bit, whilst 3.5 was 32-bit).

You can drop a line to Mellanox. Please post your feedback here.

infolink-denmar
Contributor

Any news on this? Does InfiniBand work with vSphere?

wolf
Enthusiast

Hi,

I had word from Mellanox: as far as I understood, drivers should be provided by Voltaire (and should implement iSER besides SRP).

I cannot confirm any further since I tried to have a look at Voltaire's site, but it seems any download is restricted to their customers (they require an HCA S/N).

If anybody has any info on this subject or can confirm that Voltaire provides any software, please post here.

infolink-denmar
Contributor

Hi there.

Not easy to find anything on this subject. Next week I will test a Supermicro system with InfiniBand onboard; the model is supported by VMware.

http://www.supermicro.com/products/system/1U/6016/SYS-6016TT-IBQF.cfm

Will let you know.

/martin

wolf
Enthusiast

Hi,

We have been running Supermicro SYS-6015TWs for 18 months with no issues (InfiniBand enabled), but on 3.5.

The lack of Mellanox IB card software and support for 4.0 is preventing us from upgrading from 3.5 to 4.0.

Please let me know if you get any results with 4.0.

wolf
Enthusiast

Hi,

I checked both the systems and the I/O compatibility guides again.

No InfiniBand card is listed for vSphere: this means the 6016 will work, but with no InfiniBand support (which means wasting a lot of $$$$ for nothing).

If you are going with 3.5 to get InfiniBand, I strongly suggest the 6015TW: more memory support, less expensive; and quad-rate (QDR) InfiniBand switches are going to cost you a whack.

If you go with 4.0 and try to get IB up, then the 6016 is the better choice, since it is certified (but I do not think IB will work out of the box).

Keep us informed.

wolf
Enthusiast

Those drivers are for ConnectX 10Gbit Ethernet adapters, not for InfiniBand adapters, as far as I can tell from the description.

Are you sure the InfiniBand adapter is seen correctly and works?

infolink-denmar
Contributor

Yes, it works. Or at least it is visible. I still don't have an InfiniBand switch; I need to go buy one ASAP. ;)

wolf
Enthusiast

If you have 2 ports or 2 servers (i.e. 2 IB network adapters), you can connect them point to point: this will work fine for RDMA (data access over InfiniBand), but not for IP over InfiniBand (you need a switch).

The most economical switch out there is the Flextronics (available in managed or unmanaged versions), SDR or DDR. I suggest the managed one. Remember you need a subnet manager as well; one is included in some switches.

Please let us know your progress....

GOOD!!!!

infolink-denmar
Contributor

Could you share some knowledge on my planned setup: four Supermicro ESX servers and one iSCSI storage server running Open-E or NexentaStor, all connected to a Flextronics 8-port InfiniBand switch? What I am looking to get out of this setup is a very fast storage system running iSCSI and a high-speed vMotion/backup network for my VMware ESX environment. Questions about the subnet manager: is there one in the Flextronics switch? Can OpenSM be installed on the vSphere server, or do I have to buy a switch with a built-in subnet manager, or will it work without one?

/martin

wolf
Enthusiast

Well,

let's have the concepts in place first.

InfiniBand is a low-level, general-purpose transport technology that is not tied to any specific task (i.e. it is not TCP/IP). On top of the InfiniBand signalling layer, drivers can layer protocol stacks: IP is one example (IPoIB, IP over InfiniBand).

For virtualization, there are mainly 3 protocol families of interest: IPoIB (IP over InfiniBand, for networking), FCoIB (Fibre Channel over InfiniBand) and RDMA (Remote Direct Memory Access, also used for storage).

For IPoIB and FCoIB, if you need to go "outside" the InfiniBand world, you need switches with gateway modules (IB-FC gateway, IB-Ethernet gateway), and they are fairly expensive.

So the "economical" way to use InfiniBand now is to use it for IP or RDMA across the hosts; you normally use RDMA for storage access. RDMA (Remote Direct Memory Access) is an "abstract" layer, meaning it is not tied to any specific task but serves the general purpose of accessing a remote machine's memory.

Now, the most effective way to use InfiniBand for high-performance storage access is RDMA plus a storage protocol, which nowadays means either SRP (SCSI RDMA Protocol, natively supported by the Mellanox drivers) or iSER (iSCSI Extensions for RDMA).

To achieve this, you need storage capable of acting as an SRP or iSER target (SRP is better at this stage because it is already there in the Mellanox drivers; I do not know if the v4 drivers support iSER as well).

You can "easily" build such a storage box simply by using a target server with an IB card and Linux or OpenSolaris to build an SRP or iSER target (by the way, OpenSolaris has released an SRP target mode within ZFS/COMSTAR).

Open-E is not the way to go: many people, myself included, have written many times on the Open-E forums asking for an SRP implementation, but for two years Open-E has been saying it is in the pipeline without knowing when it will be released (search the Open-E forums for InfiniBand, for instance).

The current way to access Open-E via InfiniBand is iSCSI over IP over InfiniBand, which is crap: data is encapsulated so many times that performance is lost.

So the actual support for InfiniBand in Open-E is more theoretical than real: iSCSI over IP over IB means 40-80 MB/s reads (which is normal for FC or DAS), while SRP means 1600 MB/s reads (provided the disks are striped properly, since they become the real bottleneck).

I do not know anything specific about Nexenta: in theory they use ZFS (i.e. OpenSolaris), so if they support COMSTAR and its SRP implementation, it should be fine.

For subnet management, OpenSM (Open Subnet Manager, https://wiki.openfabrics.org/tiki-index.php?page=OpenSM) is an open-source project; it must run on a host attached to the IB fabric.
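
If no switch with an embedded subnet manager is available, here is a sketch of running OpenSM on a Linux box attached to the fabric (package names assume an OFED-style distro install and may differ):

yum install opensm infiniband-diags   # or the equivalent packages from your OFED bundle
/etc/init.d/opensmd start             # start the subnet manager daemon
ibstat                                # port state should go from INIT to ACTIVE once the SM is up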

Hope to have clarified a bit.

galtay
Contributor

There is news that Mellanox Technologies has announced an OFED driver supporting vSphere 4 (ESX 4.0).

I have not tried it yet with my InfiniHost 25208 chips, but it may work.

http://www.nchpc.org/2010/01/mellanox-announces-infiniband-ofed-driver-for-vmware-infrastructure-4/

Merlin22
Contributor

Hello, all.

Has anybody solved the task of connecting ESX 4 to storage over InfiniBand?

mlxali
Enthusiast

Dear Merlin22,

To connect to remote storage over InfiniBand, you can either use the SCSI RDMA Protocol (SRP) or any IP-based protocol over IPoIB (such as iSCSI + IPoIB).

Today, Mellanox supports both IPoIB and SRP for ESX 3.5 and ESX 4.0.
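
Once the IPoIB driver is loaded, a rough sketch of wiring an IPoIB port into a vSwitch on ESX 4.0 (the vmnic/vSwitch/port group names and the IP address are placeholders):

esxcfg-nics -l                        # IPoIB ports show up as additional vmnics
esxcfg-vswitch -a vSwitch2            # dedicated vSwitch for IB traffic
esxcfg-vswitch -L vmnic2 vSwitch2     # uplink the IPoIB vmnic
esxcfg-vswitch -A IPoIB-vmk vSwitch2  # port group for VMkernel traffic (iSCSI/vMotion)
esxcfg-vmknic -a -i 192.168.50.11 -n 255.255.255.0 IPoIB-vmk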

mlxali
Enthusiast

Mellanox has InfiniBand drivers for both ESX/ESXi 3.5/4.0

http://www.mellanox.com/content/pages.php?pg=products_dyn&product_family=36&menu_section=34#tab-two

These drivers include the SCSI RDMA Protocol (SRP) for storage over InfiniBand, and IP over InfiniBand (IPoIB) for networking.

Check the Installation Guide for more details.
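
A rough install sketch for ESX 4.0 classic, from the service console (the bundle filename is a placeholder; the exact file and steps come from the Mellanox installation guide and may differ):

esxupdate --bundle=MLNX-OFED-<version>.zip update   # install the offline driver bundle
reboot                                              # reload the vmkernel with the new modules
vmkload_mod -l | grep -i ib                         # verify the IB modules are now loaded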

wolf
Enthusiast

We developed a storage subsystem for native InfiniBand and VMware.

With 16 SAS disks, performance from within a virtual machine is around 2000 MB/s (sequential write).

No cache, no SSDs.

Windows Server 2008 64-bit with 2 virtual processors boots in 6 seconds (power-on to logon screen).
