VMware Cloud Community
cariparo
Contributor

iSCSI and crossover cables

Hi All,

I have 3 physical servers, each running ESXi 4, and one shared storage box (Windows Server 2008) with 4 GbE NICs.

I'm thinking of connecting each server to the storage with a crossover cable and using the iSCSI protocol to map the disks.

I read somewhere that crossover cables are not supported and that they slow down performance.

Does anyone have more info or experience about it please?

Thanks in advance

Carip

1 Solution

Accepted Solutions
AlbertWT
Virtuoso

Hi Cariparo,

Of course you can do it. See the following diagram; this is what I'm using now.

A direct connection from the SAN to the ESXi servers, with nothing in between, eliminates the single point of failure a switch would introduce and still provides redundancy.

Each color in the iSCSI network is a separate IP subnet. From each ESXi host into the production network I run two cables to two different ports, and then add those pNICs to the same vSwitch for failover.
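The two-uplink vSwitch described above can be sketched with the ESXi 4.x service-console CLI. This is a minimal example, not the poster's actual config; the vSwitch, NIC, and port-group names are placeholders.

```shell
# Sketch: one vSwitch with two physical uplinks for failover.
# vSwitch1, vmnic2, vmnic3 and "Production" are example names.
esxcfg-vswitch -a vSwitch1               # create the vSwitch
esxcfg-vswitch -L vmnic2 vSwitch1        # link the first physical NIC
esxcfg-vswitch -L vmnic3 vSwitch1        # link the second; teaming gives failover
esxcfg-vswitch -A "Production" vSwitch1  # port group for the VMs
esxcfg-vswitch -l                        # verify uplinks and port groups
```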

Hope this helps.

Any comments and input would be greatly appreciated.

Kind Regards,

AWT

/* Please feel free to provide any comments or input you may have. */

10 Replies
DSTAVERT
Immortal

Set it up and test. Then get a switch, set that up, and test again. Compare the results.

-- David -- VMware Communities Moderator
jfelinski
Enthusiast

Hi,

Since Gigabit Ethernet cards were released, you no longer need a crossover cable to connect two devices directly; thanks to auto MDI/MDI-X, they will negotiate over a standard Cat 5 UTP cable.

With the configuration you've described this will work; however, you'll have only one path from each host to the storage. From a redundancy and multipathing point of view, this is not recommended.
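For comparison, a multipath setup on ESXi 4.x would use one VMkernel port per iSCSI subnet and a round-robin path policy on the LUN. This is only a sketch; the interface names, IP address, and `naa.` device ID below are placeholders, not values from this thread.

```shell
# Sketch: one iSCSI VMkernel port, repeated per subnet/NIC for multipathing.
esxcfg-vswitch -a vSwitch2
esxcfg-vswitch -L vmnic1 vSwitch2
esxcfg-vswitch -A "iSCSI-1" vSwitch2
esxcfg-vmknic -a -i 10.0.1.11 -n 255.255.255.0 "iSCSI-1"
# After a second port is set up on another subnet/NIC, set the LUN's
# path selection policy to round robin (device ID is a placeholder):
esxcli nmp device setpolicy -d naa.<device-id> --psp VMW_PSP_RR
```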

--- MCSA+S, CompTIA Security+, VCP 3, VCP 4, vExpert, http://wirtualizacja.wordpress.com
cariparo
Contributor

Thanks,

I didn't think about multipathing... I will set up a gigabit switch.

Carip

cariparo
Contributor

Thanks Albert,

You are using more or less what I have in mind, with the difference that I also want to virtualize DCs and Exchange.

(But in an official ESX 3 forum I read that crossover cables aren't supported...)

Question solved.

Now I'm investigating the new features of the ESXi iSCSI initiator.

I'm trying to find out whether the "problems" described here:

http://virtualgeek.typepad.com/virtual_geek/2009/01/a-multivendor-post-to-help-our-mutual-iscsi-cust...

have been solved in this new version.

Jumbo frame support in the VMkernel seems solved; now I'm looking for information about multiple sessions or connections, similar to the Microsoft initiator, to best aggregate the transfer rate. (Each of my servers has 4 x 1 Gb NICs dedicated to the SAN.)
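On ESXi 4.x, jumbo frames and per-NIC iSCSI sessions can be sketched as below. The names `vSwitch2`, `iSCSI-1`, `vmk1`/`vmk2`, and `vmhba33` are placeholders I've assumed for illustration; they are not from this thread.

```shell
# Sketch: jumbo frames on the iSCSI vSwitch and vmknic, then binding
# each VMkernel NIC to the software iSCSI HBA so the initiator opens
# one session per NIC.
esxcfg-vswitch -m 9000 vSwitch2                                   # MTU on the vSwitch
esxcfg-vmknic -a -i 10.0.1.11 -n 255.255.255.0 -m 9000 "iSCSI-1"  # MTU on the vmknic
esxcli swiscsi nic add -n vmk1 -d vmhba33                         # bind first vmknic
esxcli swiscsi nic add -n vmk2 -d vmhba33                         # bind second vmknic
esxcli swiscsi nic list -d vmhba33                                # verify the bindings
```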

If I discover more info I will update this thread.

Thanks,

Cariparo

cariparo
Contributor

Sorry, an internal server error caused multiple posts.

pie8ter
Contributor

We are looking at a setup like yours. We are planning to purchase about six Dell servers to replace our old ones, plus one MD3000i. Are there enough network ports on the MD3000i to connect six servers?

We also have Backup Exec 12.0. Does it support backing up VMs?

Thanks

AlbertWT
Virtuoso

Hi Pieter,

AFAIK, the MD3000i only has 2 GbE NICs on each of its storage controllers, so without a switch it can only support two hosts.

/* Please feel free to provide any comments or input you may have. */