In 3.5 you had to install a driver on ESX; the driver was provided by Mellanox (see http://www.mellanox.com/content/pages.php?pg=products_dyn&product_family=36&menu_section=34)
I do not think they will work in 4 (it's 64-bit, whilst 3.5 was 32-bit).
You can drop a line to Mellanox. Please post your feedback here.
Any news on this? Does InfiniBand work with vSphere?
I had word from Mellanox: as far as I understood, drivers should be provided by Voltaire (and should implement iSER besides SRP).
I cannot confirm any further since I tried to have a look at Voltaire (http://www.voltaire.com/SupportAndServices/Drivers), but it seems that any download is restricted to their customers (they require an HCA S/N).
If anybody has any info on this subject or can confirm Voltaire provides any software, please post here.
Not easy to find anything on this subject. Next week I will
test a Supermicro system with onboard InfiniBand; the model is
supported by VMware.
Will let you know.
We have been running on Supermicro SYS-6015TWs for 18 months with no issues (InfiniBand enabled), but on 3.5.
Lack of Mellanox IB card software and support for 4.0 is preventing us from upgrading from 3.5 to 4.0.
Please let me know if you get any result with 4.0.
I checked again both the systems and the I/O compatibility lists.
No InfiniBand card is available for vSphere: this means that the 6016 will work, but with no InfiniBand support (which means wasting a lot of $$$$ for nothing).
If you are going with 3.5 to get InfiniBand, I strongly suggest the 6015TW: more memory support, less expensive; quad-band InfiniBand switches are going to cost you a whack.
If you go for 4.0 and try to get IB up, then the 6016 is better, since it is certified (but I do not think IB will work out of the box).
Keep us informed.
Now i can see the infiniband hardware !!!
Those drivers are for ConnectX 10Gbit Ethernet adapters, not for InfiniBand adapters, as far as I can tell from the description.
Are you sure the InfiniBand adapter is seen correctly and works?
If you have 2 ports or 2 servers (i.e. 2 IB network adapters), you can connect them point to point: this will work fine for RDMA (data access over InfiniBand), but not for IP over InfiniBand (for that you need a switch).
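If you do wire two hosts back to back, the OFED diagnostic tools can confirm the link. A hedged sketch (tool names come from the OFED/infiniband-diags package; the GUID is an example placeholder, use whatever ibstat reports on the remote host):

```shell
# Check the local HCA: port state should be "Active" once a subnet
# manager is running somewhere on the fabric (even back to back, one
# of the two hosts must run opensm).
ibstat

# List the hosts visible on the fabric.
ibhosts

# On host A, start an ibping responder in the background...
ibping -S &

# ...and on host B, ping it by port GUID (value from ibstat on host A;
# the GUID below is a made-up example).
ibping -G 0x0002c90200267a21
```

This only verifies IB-level connectivity; IPoIB and SRP are separate layers on top.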
The most economical switch out there is the Flextronics (available in managed or unmanaged versions), SDR or DDR. I suggest the managed one. Remember you need a subnet manager as well; it is included in some switches.
Please let us know your progress....
Could you share some knowledge on my question about a setup with 4 ESX Supermicro servers and one iSCSI server running Open-E or NexentaStor storage software, all connected to a Flextronics 8-port InfiniBand switch? What I'm looking to get out of this setup is a very fast storage system running iSCSI and a high-speed VMotion/backup network option on my VMware ESX system. Questions about the subnet manager: is there one in the Flextronics switch? Can OpenSM be installed on the vSphere server? Or does one have to buy a switch with a built-in subnet manager? Or will it work without one?
Let's get the concepts in place first.
InfiniBand is a low-level, general-purpose signal transport technology, not tied to any specific task (i.e. it is not TCP/IP). On top of the InfiniBand signalling layer, drivers can layer protocol stacks: IP is one example (IPoIB, IP over InfiniBand).
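To make IPoIB concrete: once the driver stack is loaded, the fabric simply shows up as another network interface. A minimal Linux sketch (assuming an OFED install; the interface name ib0 and the addresses are just examples, not taken from any particular setup):

```shell
# Load the IPoIB kernel module (usually loaded automatically by OFED).
modprobe ib_ipoib

# The first IPoIB interface typically appears as ib0; give it an
# address like any Ethernet NIC.
ip addr add 192.168.50.1/24 dev ib0
ip link set ib0 up

# Reach another IPoIB host on the same fabric.
ping -c 3 192.168.50.2
```

From here on, anything that speaks IP (iSCSI, NFS, VMotion traffic) can run over the fabric, at the cost of the extra encapsulation discussed below.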
Mainly, for virtualization, there are 3 protocol families that are interesting: IPoIB (IP over InfiniBand, for networking), FCoIB (Fibre Channel over InfiniBand) and RDMA (Remote Direct Memory Access, used also for storage).
For IPoIB and FCoIB, if you need to go "outside" the InfiniBand world, you need switches with gateway modules (IB-FC gateway, IB-Ethernet gateway), and they are fairly expensive.
So the "economic" way to use Infiniband now is to use it for IP or RDMA across the hosts; you normally use RDMA for storage access. RDMA (Remote Direct Memory Access) is an "abstract" layer, meaning is not suited to any specific task but to the general task purpose of accessing a remote machine memory.
Now, the most effective way to use InfiniBand for high-performance storage access is RDMA plus a storage protocol, which nowadays means either SRP (SCSI RDMA Protocol, natively supported by the Mellanox drivers) or iSER (iSCSI Extensions for RDMA).
To achieve this, you need a storage box capable of being an SRP or iSER target (SRP is better at this stage because it is already there in the Mellanox drivers; I do not know if the v4 drivers support iSER as well).
You can "easily" build such a storage simply using a target server with an IB card and using Linux or OpenSolaris to build a SRP or iSER target (btw: solaris has released a SRP target mode within the ZFS/COMSTAR, see http://www.opensolaris.org/os/project/srp/SRP_TOI_1_0.pdf).
Open-E is not the way to go: many people, myself included, have been writing on the Open-E forums asking for an SRP implementation, but for 2 years Open-E has been saying it is in the pipeline without knowing when it will be released (search the Open-E forums for InfiniBand, for instance http://forum.open-e.com/showthread.php?t=1341).
The actual way to access Open-E via InfiniBand is iSCSI over IP over InfiniBand, which is crap: the data is encapsulated so many times that performance is lost.
So the actual support for InfiniBand in Open-E is more theoretical than real: iSCSI over IP over IB means 40/80 MB/s reads (which is normal for FC or DAS), while SRP means 1600 MB/s reads (provided proper striping is done on the disks, which become the real bottleneck).
I do not know anything specific about Nexenta: in theory they use ZFS (which is OpenSolaris), so if they support COMSTAR and the SRP implementation on COMSTAR, it should be fine.
For subnet management, OpenSM (Open Subnet Manager, [https://wiki.openfabrics.org/tiki-index.php?page=OpenSM]) is an open source project: it must run on a host attached to the IB fabric.
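On a Linux host that is to act as the subnet manager, this amounts to something like the following (assuming the opensm and infiniband-diags packages from an OFED install; flags and package names may vary by distribution and version):

```shell
# Start OpenSM as a background daemon; it discovers the fabric and
# assigns LIDs to all ports.
opensm -B

# Query the fabric: this should report exactly one master SM.
sminfo

# Port state on the hosts should now move from "Initializing"
# to "Active".
ibstat | grep State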
Hope to have clarified a bit.
There is news that Mellanox has announced OFED driver support for vSphere 4 (ESX 4.0).
I have not tested it yet with my InfiniHost 25208 chips, but it may work.
Has anybody solved the task of connecting ESX 4 to storage over InfiniBand?