I have been reading the forum and am unclear on one question.
Given that ESX 3.0.1 does not yet do multipathing with NICs (not HBAs), is it possible for me to use some available gigabit NICs to direct the I/O for certain VMs through them individually to improve throughput?
My target has multiple NICs, and I would like to segregate the I/O of a few VMs onto their own NIC, on both the target and the ESX server.
The VMkernel software initiator does not do multipathing today; however, you can create a portgroup on a virtual switch with two or more physical NICs and use simple failover for redundancy.
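As a rough sketch, that failover setup could be built from the service console like this. The switch name (vSwitch1), uplinks (vmnic1/vmnic2), portgroup name, and IP addressing are all example values, not anything from your environment:

```shell
# Create a virtual switch and attach two physical NICs as uplinks
# (failover between them is then configured in the VI Client)
esxcfg-vswitch -a vSwitch1
esxcfg-vswitch -L vmnic1 vSwitch1
esxcfg-vswitch -L vmnic2 vSwitch1

# Add a portgroup for the VMkernel iSCSI traffic
esxcfg-vswitch -A "iSCSI" vSwitch1

# Create a VMkernel NIC on that portgroup (address/netmask are examples)
esxcfg-vmknic -a -i 10.0.0.10 -n 255.255.255.0 "iSCSI"
```

With the software initiator this gives you redundancy if one uplink fails, but not load-balanced multipathing.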
You can also run the Microsoft iSCSI initiator within the virtual machine. In that case you can use two virtual NICs that go to redundant virtual switches. The MS iSCSI initiator does allow failover and load balancing (the latter not on Windows XP).
Right, but can I not segment the I/O from multiple VMs by making additional switches, each with their own vmkernel/service/physical/VM NICs? Does that make sense?
Is there a way to make more than one iSCSI adapter in ESX?
You can use the technique of additional switches, provided the switches are on different subnets and you're relying on VMkernel routing to select the switch.
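For illustration, a minimal sketch of that multiple-switch layout from the service console might look like the following. The switch names, uplinks, portgroup names, and subnets are all invented example values; the point is simply that each VMkernel NIC sits on a different subnet so VMkernel routing picks the matching switch for each target address:

```shell
# First switch, uplink, and VMkernel NIC on subnet 10.0.1.0/24
esxcfg-vswitch -a vSwitch2
esxcfg-vswitch -L vmnic2 vSwitch2
esxcfg-vswitch -A "iSCSI-A" vSwitch2
esxcfg-vmknic -a -i 10.0.1.10 -n 255.255.255.0 "iSCSI-A"

# Second switch, uplink, and VMkernel NIC on subnet 10.0.2.0/24
esxcfg-vswitch -a vSwitch3
esxcfg-vswitch -L vmnic3 vSwitch3
esxcfg-vswitch -A "iSCSI-B" vSwitch3
esxcfg-vmknic -a -i 10.0.2.10 -n 255.255.255.0 "iSCSI-B"
```

Traffic to a target on 10.0.1.x would then leave via vSwitch2, and traffic to 10.0.2.x via vSwitch3, because each subnet is directly connected through its own VMkernel NIC.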
You can also have multiple NICs on the same vSwitch and use IP hash to spread outbound traffic across the multiple NICs, provided there are multiple IP addresses on the target storage system.
There is no way to make more than one SW iSCSI initiator in an ESX host.
Bear with me while I digest your post...
So for the first point, I assume I need to make additional VMkernel/console NICs in each switch?
For the second point, I assume I can just attach multiple physical NICs to the one switch and enable IP hash?