-
1. Re: vSphere/ESX 4.0 build-164009 and Infiniband
wolf Jun 10, 2009 8:11 AM (in response to StephanS80)
Hi,
in 3.5 you had to install a driver on ESX; the driver was provided by Mellanox (see http://www.mellanox.com/content/pages.php?pg=products_dyn&product_family=36&menu_section=34).
I do not think they will work in 4 (it is 64-bit, whereas 3.5 was 32-bit).
You could drop a line to Mellanox. Please post your feedback here.
-
2. Re: vSphere/ESX 4.0 build-164009 and Infiniband
infolink-denmark Jul 26, 2009 11:09 AM (in response to StephanS80)
Any news on this? Does InfiniBand work with vSphere?
-
3. Re: vSphere/ESX 4.0 build-164009 and Infiniband
wolf Jul 26, 2009 10:19 PM (in response to infolink-denmark)
Hi,
I had word from Mellanox: as far as I understood, the drivers should be provided by Voltaire (and should implement iSER besides SRP).
I cannot confirm any further: I tried to have a look at Voltaire (http://www.voltaire.com/SupportAndServices/Drivers), but it seems that downloads are restricted to their customers (they require an HCA S/N).
If anybody has any info on this subject or can confirm that Voltaire provides any software, please post here.
-
4. Re: vSphere/ESX 4.0 build-164009 and Infiniband
infolink-denmark Jul 27, 2009 5:10 AM (in response to wolf)
Hi there.
It is not easy to find anything on this subject. In the next week I will test a Supermicro system with InfiniBand onboard; the model is supported by VMware.
http://www.supermicro.com/products/system/1U/6016/SYS-6016TT-IBQF.cfm
I will let you know.
/martin
-
5. Re: vSphere/ESX 4.0 build-164009 and Infiniband
wolf Jul 27, 2009 5:14 AM (in response to infolink-denmark)
Hi,
we have been running on Supermicro SYS-6015TWs for 18 months with no issues (InfiniBand enabled), but on 3.5.
The lack of Mellanox IB card software and support for 4.0 is preventing us from upgrading from 3.5 to 4.0.
Please let me know if you get any results with 4.0.
-
6. Re: vSphere/ESX 4.0 build-164009 and Infiniband
wolf Jul 27, 2009 5:29 AM (in response to infolink-denmark)
Hi,
I checked both the systems list and the I/O compatibility list again.
No InfiniBand card is listed for vSphere: this means the 6016 will work, but with no InfiniBand support (which means wasting a lot of $$$$ for nothing).
If you are going with 3.5 to get InfiniBand, I strongly suggest the 6015TW: more memory support, less expensive; QDR (quad data rate) InfiniBand switches are going to cost you a whack.
If you go for 4.0 and try to get IB up, then the 6016 is better, since it is certified (but I do not think IB will work out of the box).
Keep us informed.
-
7. Re: vSphere/ESX 4.0 build-164009 and Infiniband
infolink-denmark Sep 7, 2009 2:59 AM (in response to wolf)
http://www.mellanox.com/content/pages.php?pg=press_release_item&rec_id=344
I installed
http://www.vmware.com/support/vsphere4/doc/drivercd/esx-net-mlx4-en_400.1.4.1.174-1.0.4.00000.html
and now I can see the InfiniBand hardware!!!
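For anyone else trying this: the driver CD should install like any other ESX 4.0 offline bundle. A sketch (the bundle file name here is a placeholder; use whatever ships on the driver CD ISO):

  # On the ESX 4.0 service console (bundle name is illustrative):
  esxupdate --bundle=offline-bundle.zip update
  # On ESXi there is no service console; push the same bundle from the remote vSphere CLI:
  vihostupdate.pl --server <esx-host> --install --bundle offline-bundle.zip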
-
8. Re: vSphere/ESX 4.0 build-164009 and Infiniband
wolf Sep 7, 2009 3:04 AM (in response to infolink-denmark)
Those drivers are for ConnectX 10Gbit Ethernet adapters, not for InfiniBand adapters, as far as I can tell from the description.
Are you sure the InfiniBand adapter is seen correctly and works?
-
9. Re: vSphere/ESX 4.0 build-164009 and Infiniband
infolink-denmark Sep 7, 2009 3:33 AM (in response to wolf)
Yes, it works. Or at least it is visible. I still don't have an InfiniBand switch; I need to go buy one ASAP.
-
Picture 7.png 24.8 K
-
10. Re: vSphere/ESX 4.0 build-164009 and Infiniband
wolf Sep 7, 2009 3:39 AM (in response to infolink-denmark)
If you have 2 ports or 2 servers (i.e. 2 IB network adapters), you can connect them point to point: this will work fine for RDMA (data access over InfiniBand), though not for IP over InfiniBand (for that you need a switch).
The most economical switch out there is the Flextronics (available in managed or unmanaged versions), SDR or DDR. I suggest the managed one. Remember you need a subnet manager as well; it is included in some switches.
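Once it is cabled up (point to point or through a switch), you can check that the link actually comes up with the standard OFED diagnostics on a Linux box; a sketch using the infiniband-diags tools:

  ibstat     # "Physical state" shows LinkUp as soon as the cable is good;
             # port "State" stays Initializing until a subnet manager runs
             # on the fabric, then goes Active
  ibhosts    # list the hosts visible on the fabric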
Please let us know your progress....
GOOD!!!!
-
11. Re: vSphere/ESX 4.0 build-164009 and Infiniband
infolink-denmark Sep 10, 2009 7:36 AM (in response to wolf)
Could you share some knowledge on a setup with 4 ESX Supermicro servers and one iSCSI server running Open-E or NexentaStor storage software, all connected to a Flextronics 8-port InfiniBand switch? What I'm looking to get out of this setup is a very fast storage system running iSCSI, plus a high-speed vMotion/backup network option on my VMware ESX system.
Questions about the subnet manager: is there one in the Flextronics switch? Can OpenSM be installed on the vSphere server, or does one have to buy a switch with a built-in subnet manager? Or will it work without one?
/martin
-
12. Re: vSphere/ESX 4.0 build-164009 and Infiniband
wolf Sep 12, 2009 4:01 AM (in response to infolink-denmark)
Well,
let's get the concepts in place first.
InfiniBand is a low-level, general-purpose signal transport technology that is not tied to any specific task (i.e. it is not TCP/IP). On top of the InfiniBand signalling layer, drivers can layer protocol stacks: IP is one example (IPoIB, IP over InfiniBand).
For virtualization, there are mainly 3 protocol families of interest: IPoIB (IP over InfiniBand, for networking), FCoIB (Fibre Channel over InfiniBand) and RDMA (Remote Direct Memory Access, used also for storage).
For IPoIB and FCoIB, if you need to go "outside" the InfiniBand world, you need switches with gateway modules (IB-FC gateway, IB-Ethernet gateway), and they are fairly expensive.
So the "economical" way to use InfiniBand now is to use it for IP or RDMA across the hosts; you normally use RDMA for storage access. RDMA is an "abstract" layer, meaning it is not tied to any specific task but serves the general purpose of accessing a remote machine's memory.
Now, the most effective way to use InfiniBand for high-performance storage access is to use RDMA plus a storage protocol, which nowadays means either SRP (SCSI RDMA Protocol, natively supported by the Mellanox drivers) or iSER (iSCSI Extensions for RDMA).
To achieve this, you need storage capable of acting as an SRP or iSER target (SRP is better at this stage because it is already there in the Mellanox drivers; I do not know if the v4 drivers support iSER as well).
You can "easily" build such storage simply by using a target server with an IB card and using Linux or OpenSolaris to build an SRP or iSER target (BTW: Solaris has released an SRP target mode within ZFS/COMSTAR, see http://www.opensolaris.org/os/project/srp/SRP_TOI_1_0.pdf).
Open-E is not a good way: many people, myself included, have written many times on the Open-E forums asking for an SRP implementation, but for 2 years Open-E has been saying it is in the pipeline without knowing when it will be released (search the Open-E forums for InfiniBand, for instance http://forum.open-e.com/showthread.php?t=1341).
The current way to access Open-E via InfiniBand is iSCSI over IP over InfiniBand, which is crap: the data is encapsulated so many times that performance is lost.
So the current InfiniBand support in Open-E is more theoretical than real: iSCSI over IP over IB means 40-80 MB/s reads (which is normal for FC or DAS), while SRP means 1600 MB/s reads (provided proper striping is done across the disks, which become the real bottleneck).
I do not know anything specific about Nexenta: in theory it uses ZFS (which is OpenSolaris), so if it supports COMSTAR and the SRP implementation on COMSTAR, it should be fine.
For subnet management, OpenSM (Open Subnet Manager, https://wiki.openfabrics.org/tiki-index.php?page=OpenSM) is an open source project; it must be run on a host attached to the IB fabric.
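If your switch does not have a built-in subnet manager, running OpenSM on any Linux host on the fabric is enough. A sketch, assuming the OFED packages are installed (packaging details vary by distro):

  opensm -B                  # start OpenSM in the background on the first active port
  # or run it as the service most OFED installs ship with:
  /etc/init.d/opensmd start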
I hope that clarifies things a bit.
-
13. Re: vSphere/ESX 4.0 build-164009 and Infiniband
galtay Feb 17, 2010 2:27 AM (in response to wolf)
News says that Mellanox Technologies has announced an OFED driver supporting vSphere 4 (ESX 4.0).
I have not examined it yet for my InfiniHost 25208 chips, but it may work.
http://www.nchpc.org/2010/01/mellanox-announces-infiniband-ofed-driver-for-vmware-infrastructure-4/
-
14. Re: vSphere/ESX 4.0 build-164009 and Infiniband
Merlin22 Apr 18, 2010 1:50 PM (in response to StephanS80)
Hello, all.
Has anybody solved the task of connecting ESX 4 to storage over InfiniBand?