tom12010
Enthusiast

Request for help validating SAN/network setup


Hello, I am a new forum member, though I have worked with ESX for a while and I do as much reading/study as I can.

I am setting up a new HP MSA 2012i SAN and 2 VMware Enterprise hosts to use the SAN.

The SAN's block (stripe) size is 64 KB, and I know to set up the VMDKs within VC/VIC, which should give the VMs a 64 KB offset and therefore properly align them. These are 99% Windows Server VMs.
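As a sanity check on that alignment claim, quick shell arithmetic shows why a partition starting at the old Windows default of sector 63 misses a 64 KB stripe boundary, while a start at sector 128 lands on one (the sector numbers here are common illustrative defaults, not values from this thread):

```shell
# A 64 KB stripe means the first partition should start on a 65536-byte boundary.
stripe=65536
# Classic Windows default: partition starts at sector 63 (63 * 512 = 32256 bytes)
echo $(( 63 * 512 % stripe ))    # non-zero remainder: misaligned
# Aligned layout: start at sector 128 (128 * 512 = 65536 bytes)
echo $(( 128 * 512 % stripe ))   # 0: aligned to the stripe
```

Any non-zero remainder means every guest I/O can straddle two stripes on the array.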

Our SAN has 2 controllers and several drives set up as 1 vdisk which will be divided into several LUNs.

1 management port per controller

2 ports for the iSCSI on each controller.

All are assigned static IPs within our 172.16.0.xxx subnet so they can all talk to the HP Procurve 2824 switch and so the hosts-to-be can eventually connect to the iSCSI LUNs.

My questions:

1. Should I set up VLANs on the switch for the SAN's management and iSCSI controller Ethernet ports?

2. Within ESX, for its vSwitches, can I use different subnets for the Service Console and VMkernel? (e.g. 172.16.1.xxx for the Service Console and 172.16.2.xxx for the VMkernel, etc.)

Or must I put the vSwitches into the same 172.16.0.xxx subnet?

3. I know that I create the LUNs within the MSA's web console, and then I point VirtualCenter and the hosts at the LUNs. Can someone refer me to anything which may explain this in more detail?

Thank you!!

1 Solution

Accepted Solutions
Lightbulb
Virtuoso

Like they say: quick, good, and cheap; pick any two. That is just a fact of business. We often have to work with the hardware we have and do the best we can.

Go with the 3 VLANs:

1. For VMs, management, and everything else non-storage: 172.16.0.0/24. Use 2 of your ESX hosts' NICs so that there is failover capability.

2. Storage VLAN 1: 172.16.1.0/24, 1 pNIC

3. Storage VLAN 2: 172.16.2.0/24, 1 pNIC

This setup gives you redundancy for both storage and VM/SC networks, with the hardware you already have.

You can always expand later if you get the funds.


22 Replies
Lightbulb
Virtuoso

I would recommend a totally separate switched network for iSCSI traffic. This would entail the purchase of another switch, but it is worth it to totally isolate storage traffic from the SC and VM networks.

If not, you can, and should, use VLANs on the switch to logically isolate the storage network. For the storage VLAN you could use a whole other IP network (e.g. 192.168.1.x), since there will be no routing involved. You would set up a second SC portgroup on the VMkernel vSwitch used for iSCSI traffic if you intend to use authentication (I am a little fuzzy about this, so perhaps someone else will jump in).

Your SAN management ports could be on the 172.16.0.x network so you could manage from the LAN.

So, 2 x vSwitches.

Hopefully 2 x separate physical networks, but if you cannot afford it, 2 separate VLANs.

Texiwill
Leadership

Hello,

Welcome to the forums.

1. Should I set up VLANs on the switch for the SAN's management iSCSI controller Ethernet ports??

If you can use a dedicated pSwitch that would be best, but if not then yes.

2. Within ESX, for its vSwitches, for Service Console and vKernel, can I use different subnets?? (e.g. 172.16.1.xxx for Service Console and 172.16.2.xxxx for vKernel, etc.)??

Yes, you have to use different subnets. Remember, however, that your SC must participate in the iSCSI network, either directly (using a 2nd SC portgroup on a vSwitch) or via a gateway.

Or must I put the vSwitches into the same 172.16.0.xxx subnet??

You actually put the interfaces (vswif/vmknic) on different subnets, not the vSwitch itself.

3. I know that I create the LUNs within the MSA's web console, and then I point Virtual Center and hosts to see the LUNs. Can someone refer me to anything which may explain this in more detail??

You present the LUNs to the VMkernel port for iSCSI; VirtualCenter has little to do with this. It manages ESX and has nothing to do with SAN presentation.


Best regards,
Edward L. Haletky
VMware Communities User Moderator
====
Author of the book 'VMware ESX Server in the Enterprise: Planning and Securing Virtualization Servers', Copyright 2008 Pearson Education.
Blue Gears and SearchVMware Pro Blogs -- Top Virtualization Security Links -- Virtualization Security Round Table Podcast

--
Edward L. Haletky
vExpert XIII: 2009-2021,
VMTN Community Moderator
vSphere Upgrade Saga: https://www.astroarch.com/blogs
GitHub Repo: https://github.com/Texiwill
Lightbulb
Virtuoso

I have experience with MSA1000/1500 but not with the MSA2000 series, which is a completely different animal. If you can present the storage properly to the ESX hosts you are already most of the way home.

I would advise you to check the known-issues pages for this array; there are a couple of issues you should be aware of, depending on which model you have.

Assuming FC

http://h20000.www2.hp.com/bizsupport/TechSupport/SupportTaskIndex.jsp?lang=en&cc=us&taskId=110&prodS...

tom12010
Enthusiast

Assuming FC

http://h20000.www2.hp.com/bizsupport/TechSupport/SupportTaskIndex.jsp?lang=en&cc=us&taskId=110&prodS...

It is iSCSI, I apologize for not saying so.

I may eventually be able to get an 8-port HP 1700-series switch for separating the traffic, or an 1800-series; which would be better? I know it doesn't have to be an expensive managed switch.

But I don't fully understand why an additional pSwitch is helpful, since even that switch must be connected to the other switches in my network so the VMs on the SAN can function and be used by people?

For starters it will be VLANs until I can get the additional switch; an 8-port switch will do.

I take it that the vSwitches in each host go onto my 172.16.0.xxx subnet? I know that only the SC and VMkernel get IP addresses and that the VM Network doesn't need one.

BTW each host has 4 NICs, 2 internal and 2 on a NIC card.

I have to ask -- present the LUNs to the VMkernel port for iSCSI? I suppose it will make sense once I get into it.

Having to ask like this is embarrassing. :(

</blush>

I'll try to find more reading on SANs and LUNs and hosts, oh my! :)

Networking is my weakness, and there's not a lot out there that explicitly spells things out. I'll try vmware-land again now, though.

I have about 3 weeks or less to figure all this out, along with other work to do. :)

Thank you, Tom

HSpeirs
Enthusiast

I did some testing with this array - nice piece of equipment.

>All are assigned static IPs within our 172.16.0.xxx subnet so they can all talk to the HP Procurve 2824 switch and so the hosts-to-be can eventually connect to the iSCSI LUNs.

Note that the two ports on each controller must be in different subnets, as per HP's documentation.

Ref: HP StorageWorks 2000 Family Modular Smart Array reference guide, Page 45:

IP Address – IP address for a specific port. The system uses port 0 of each controller as one failover pair, and port 1 of each controller as a second failover pair. Therefore, port 0 of each controller must be in the same subnet, and port 1 of each controller should be in a second subnet. For example:

■ Controller A port 0: 10.10.10.100

■ Controller A port 1: 10.11.10.120

■ Controller B port 0: 10.10.10.110

■ Controller B port 1: 10.11.10.130

So in testing the array, I set up two vSwitches on the ESX host, each vSwitch having an SC and a VMkernel port for one of the subnets, and two physical NICs going into a VLAN for that subnet. So you might do:

172.16.0.1 - Port 0 Controller 0

172.16.0.2 - Port 0 Controller 1

172.16.0.100 - SC vSwitch1

172.16.0.101 - vKernel vSwitch1

172.16.1.1 - Port 1 Controller 0

172.16.1.2 - Port 1 Controller 1

172.16.1.100 - SC vSwitch2

172.16.1.101 - vKernel vSwitch2
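For what it's worth, the two-vSwitch layout above could be scripted from the ESX 3.x service console roughly as follows. This is only a sketch: the portgroup names, vswif numbers, and vmnic numbers are my assumptions, so check the esxcfg-* man pages on your host before running anything.

```shell
# Hypothetical ESX 3.x sketch; names and vmnic numbers are assumptions.

# vSwitch1: first storage subnet (172.16.0.x), uplinked to vmnic2
esxcfg-vswitch -a vSwitch1
esxcfg-vswitch -L vmnic2 vSwitch1
esxcfg-vswitch -A "SC-iSCSI-1" vSwitch1
esxcfg-vswif -a vswif1 -p "SC-iSCSI-1" -i 172.16.0.100 -n 255.255.255.0
esxcfg-vswitch -A "VMkernel-iSCSI-1" vSwitch1
esxcfg-vmknic -a -i 172.16.0.101 -n 255.255.255.0 "VMkernel-iSCSI-1"

# vSwitch2: second storage subnet (172.16.1.x), uplinked to vmnic3
esxcfg-vswitch -a vSwitch2
esxcfg-vswitch -L vmnic3 vSwitch2
esxcfg-vswitch -A "SC-iSCSI-2" vSwitch2
esxcfg-vswif -a vswif2 -p "SC-iSCSI-2" -i 172.16.1.100 -n 255.255.255.0
esxcfg-vswitch -A "VMkernel-iSCSI-2" vSwitch2
esxcfg-vmknic -a -i 172.16.1.101 -n 255.255.255.0 "VMkernel-iSCSI-2"
```

`esxcfg-vswitch -l` afterwards will list both vSwitches so you can verify the portgroups and uplinks landed where you expected.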

H.

Lightbulb
Virtuoso

On the MSA1000/1500 you would use SSP to mask the LUNs. (You did not have to, but if you later wanted to use the storage for another host, you had to down everything and turn SSP on.) Like I said, I have not touched the new series, so I do not know how things are done now. I do know that HP subcontracted this device to another hardware vendor, so it is not a development of the preceding models.

An 8-port switch should be good. You have 4 total iSCSI ports on the controllers, and you would probably want a VMkernel vSwitch on each host that uplinks to 2 pNICs in a failover configuration. 16 ports would be better, as you would have room to grow.
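The sizing can be sanity-checked with quick arithmetic (the counts are assumptions matching this thread: 4 array ports and 2 storage pNICs per host):

```shell
# Port budget for a dedicated iSCSI switch (counts are thread assumptions).
array_ports=4
pnics_per_host=2
hosts=2
echo $(( array_ports + hosts * pnics_per_host ))   # 8: exactly fills an 8-port switch
hosts=3
echo $(( array_ports + hosts * pnics_per_host ))   # 10: a third host already needs 16 ports
```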

There is no need for anyone to talk to the storage network except the ESX hosts (unless you plan to use the MSFT iSCSI software initiator from within the VMs, but I don't think you will be doing that). This should be an isolated network, to prevent other network traffic from impacting its performance or vice versa.

Try these on for size

http://itknowledgeexchange.techtarget.com/network-administrator/iscsi-in-vmware-esx-3/

tom12010
Enthusiast

■ Controller A port 0: 10.10.10.100

■ Controller A port 1: 10.11.10.120

■ Controller B port 0: 10.10.10.110

■ Controller B port 1: 10.11.10.130

So in testing the array, I set up two vSwitches on the ESX host, each vSwitch having an SC and a VMkernel port for one of the subnets, and two physical NICs going into a VLAN for that subnet. So you might do:

172.16.0.1 - Port 0 Controller 0

172.16.0.2 - Port 0 Controller 1

172.16.0.100 - SC vSwitch1

172.16.0.101 - vKernel vSwitch1

172.16.1.1 - Port 1 Controller 0

172.16.1.2 - Port 1 Controller 1

172.16.1.100 - SC vSwitch2

172.16.1.101 - vKernel vSwitch2

Thank you for sending this. Where I am confused is why the 172.16.x.x subnets for the vSwitches, and the different subnets inside each vSwitch for the controller ports, though I think I understand the 10.x.x.x subnet for the HP part as stated above. I can readily change what I have now. I've read various articles about ESX networking but have not been able to concretely connect them to my own setup/experience.

Is the subnetting for the 4 sets of ports and vSwitches above only internal to ESX, with nothing to do with anything else, so to speak?

I am planning 1 pNIC for the SC; 1 pNIC for VMkernel, VMotion, and iSCSI (< 10 hosts, not a lot of VMotion planned); and 2 pNICs teamed for the VM Network, according to Texiwill's blog article on what to do when one has 4 pNICs.

I'll try VLAN-ing the current switch, but I don't think I need to plan for immediate switch growth, because I have enough capacity now for the next 2 years or so in terms of hosts and storage. That is, I won't be adding more hosts or more SANs.

Thank you...

HSpeirs
Enthusiast

Tom,

The 10.x.x.x addresses were from the HP documentation; the 172.16.x.x addresses applied to your setup. The HP MSA2012i wants each pair of iSCSI ports (Controller 0 Port 0/Controller 1 Port 0, and Controller 0 Port 1/Controller 1 Port 1) on different subnets - hence 172.16.0.x for one and 172.16.1.x for the other.

Setting up this way, you have two ESX vSwitches - one for the 172.16.0.x subnet, and one for the 172.16.1.x subnet. This requires two of your physical NICs to be used for iSCSI. In each of the vSwitches you have a VMkernel port, for the actual iSCSI traffic, and an SC, as you require one alongside the VMkernel port. So:

pNICs 0,1 - teamed for VMs

pNIC 2 - iSCSI, VMotion, SC

pNIC 3 - iSCSI, VMotion, SC

Ideally, you would add another pNIC (it can just be a 10/100 NIC) and use that for the management SC, and so keep the SCs on the iSCSI ports isolated.

H.

tom12010
Enthusiast

Tom,

The 10.x.x.x addresses were from the HP documentation; the 172.16.x.x addresses applied to your setup. The HP MSA2012i wants each pair of iSCSI ports (Controller 0 Port 0/Controller 1 Port 0, and Controller 0 Port 1/Controller 1 Port 1) on different subnets - hence 172.16.0.x for one and 172.16.1.x for the other.

The rest of my LAN (all the other physical devices) is in the 172.16.0.xxx subnet.

I gather I am safe using 172.16.1.xxx for one pair of ports and 172.16.2.xxx for the other pair of ports.

Thank you also for clarifying the pNIC setup, I am getting closer to understanding how it all comes together.

This forum is amazingly generous! :)

I will find out elsewhere about whether to get an additional 1700-series or 1800-series switch for dedicated iSCSI storage traffic.

Would this switch dedicated to iSCSI storage traffic be connected to our other network switches or not?

The extra 10/100 is a great idea; it should not be hard to find one that is on the HCL.

Thank you...

P.S. Points will come along soon...

HSpeirs
Enthusiast

Tom,

The rest of my LAN (all the other physical devices) are all in the 172.16.0.xxx subnet.

Ok, I misread your initial post; I thought you were planning on 172.16.0.x for the iSCSI. So yes, using 172.16.1.x and 172.16.2.x for the iSCSI will work fine. The physical switch that the iSCSI is patched to does not get connected to your other network switches. The idea is to keep the iSCSI traffic and production LAN traffic completely isolated from each other.

H.

Texiwill
Leadership

Hello,

You want your iSCSI data traffic to be segregated from the rest of the network for security purposes, but your SC still must participate in the iSCSI network.

Check out my Topology and iSCSI blog posts for assistance in setting up your virtual network with iSCSI.


Best regards,
Edward L. Haletky
tom12010
Enthusiast

Texiwill wrote:

You want your iSCSI data traffic to be segregated from the rest of the network for security purposes, but your SC still must participate in the iSCSI network.

Hello Texiwill,

Thank you for explaining.

Check out my Topology and iSCSI blog posts for assistance in setting up your virtual network with iSCSI.

I read your iSCSI article and your 4 pNICs article and concluded from the articles and this thread that my best setup will be the following per host:

1 pNIC, 1 vSwitch, 1 portgroup = Service Console

2 pNIC, 1 vSwitch = Storage (1 portgroup iSCSI, 1 portgroup iSCSI, 1 portgroup iSCSI VMotion, 1 portgroup iSCSI VMkernel -- do I have this right?) If I connect NFS to the network on one of my hosts, then this would be added to that host.

2 pNIC, 1 vSwitch = VM Network

I will use HSpeirs' suggestions for IP addressing.

I'm ordering pNICs for the SC in each host, plus an HP 1800 8-port switch for the SAN traffic. If I am using a separate switch for the SAN traffic, I assume I still need IP addressing somewhere for the ports, as HSpeirs suggests. I guess I will have some trial and error getting the networking and subnetting correct.

Assumption: only the SAN controllers (2 iSCSI ports per controller) are connected to the additional standalone HP 1800. The other NICs are connected to our regular network switch, which could have VLANs for the SC NICs, the storage NICs, and the VM Network NICs. Am I correct here? The SAN management ports get connected to one of our regular network switches. Please correct me if I'm wrong.

This is an excellent thread, I appreciate everyone's advice, I hope that others will find it helpful too.

I do not know enough about CHAP to know whether I need to enable it, but if I do enable it, then people's comments here and your blog comments make it seem as though I need to also put the SC into the storage vSwitches?

Our cluster is quite small (2-3 hosts and presently <10 VMs) and I don't yet expect too-rapid growth (hopefully!).

Thank you...

Message edited by tom12010, corrected comment about IP addressing and added assumption statement

Texiwill
Leadership

Hello,

I read your iSCSI article and your 4 pNICs article and concluded from the articles and this thread that my best setup will be the following per host:

1 pNIC, 1 vSwitch, 1 portgroup = Service Console

2 pNIC, 1 vSwitch = Storage (1 portgroup iSCSI, 1 portgroup iSCSI, 1 portgroup iSCSI vMotion, 1 portgroup iSCSI vmKernel -- do I have this right??) On one of my hosts if I connect an NFS to the network then this would be added to that host

1 pNIC, I would assume.

The above implies that you will have subnets for SC, iSCSI, and VMotion OR VLANs for all three. VLANs would be better. pNIC0 is the backup for pNIC1 and pNIC1 is the backup for pNIC0.

2 pNIC, 1 vSwitch = VM Network

I do not know enough about CHAP to know whether I need to enable it, but if I do enable it, then people's comments here and your blog comments make it seem as though I need to also put the SC into the storage vSwitches?

Regardless of whether you use CHAP, your SC must participate in the iSCSI network. Even when CHAP is not in use, iSCSI still sends requests over the SC (this is the nature of the software iSCSI stack). So your SC pNIC should participate in the iSCSI network either directly or indirectly via a gateway. This is the most confusing aspect of iSCSI deployments.
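For reference, enabling the ESX 3.x software initiator from the service console goes roughly like this. The vmhba name and target IP are placeholders, and exact flags vary by release, so treat this as a sketch and verify against your host's man pages:

```shell
# Hypothetical ESX 3.x sketch; vmhba40 and 172.16.1.1 are placeholders.
esxcfg-firewall -e swISCSIClient         # open the SC firewall for iSCSI
                                         # (one reason the SC must reach the SAN)
esxcfg-swiscsi -e                        # enable the software iSCSI initiator
vmkiscsi-tool -D -a 172.16.1.1 vmhba40   # add a SendTargets discovery address
esxcfg-swiscsi -s                        # rescan so new LUNs show up
```

After the rescan, the LUNs appear under the host's Storage Adapters view and can be formatted as VMFS.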

If you could add just 2 more pNICs, you would have a much better implementation: no overloading of pNIC0 and pNIC1, plus more redundancy, performance, and security.


Best regards,
Edward L. Haletky
tom12010
Enthusiast

Regardless of the use of CHAP or not, your SC must participate in the iSCSI network. While CHAP is not in use, it still sends requests over the SC (this is the nature of the software iSCSI stack). In this case your SC pNIC should either participate directly or indirectly via a gateway with the iSCSI network. This is the most confusing aspect of iSCSI deployments.

If you could add just 2 more pNIC you would have a much better implementation without overloading pNIC0 and pNIC1 and add more redundancy, performance, and security.

Given money considerations, I'll be fortunate to get the single extra NIC, and the extra switch if I need it. This corresponds to your blog post about using five (5) pNICs.

Actually, I have 2 HP switches in the rack. I could disconnect them so they don't talk to each other, but then the 'regular network switch' (172.16.0.xxx subnet) would not have enough ports for everything; only 5 ports are free on it now, while the other one is not yet being used. I know that I cannot put everything from the SAN and the hosts into a separate switch, or the VMs would not be usable!

I am getting confused by this, and I don't want to be a burden by asking a million questions of people who have many other things to do.

Is there anything out there which gives any kind of specific examples? Then I could determine whether my assumptions about the setup are correct, such as which pNICs in the hosts should be connected to my regular network or to a separate switch. I am sure the answer is "It depends," sigh. :)

Thank you...

tom12010
Enthusiast

The above implies that you will have subnets for SC, iSCSI, and VMotion OR VLANs for all three. VLANs would be better. pNIC0 is the backup for pNIC1 and pNIC1 is the backup for pNIC0.

I am still absorbing all this; my head hurts! :) What I have is two HP 2824s. On the 2nd one (the first is full, with all our other physical servers) I will make VLANs for the iSCSI ports, SC, VMotion, etc. instead of a 3rd switch. With only 2 or 3 hosts and <10 VMs there should not be excessive network traffic on the switch.

Tomorrow I will post some kind of diagram to illustrate what I think will work for me. I'll work first on getting it set up so the VMs can see the LUNs etc. then later adjust for security etc.

I've found a few diagrams on the net but none close to what I'm doing....

Thank you...

tom12010
Enthusiast

Like they say: quick, good, and cheap; pick any two. That is just a fact of business. We often have to work with the hardware we have and do the best we can.

Go with the 3 VLANs:

1. For VMs, management, and everything else non-storage: 172.16.0.0/24. Use 2 of your ESX hosts' NICs so that there is failover capability.

2. Storage VLAN 1: 172.16.1.0/24, 1 pNIC

3. Storage VLAN 2: 172.16.2.0/24, 1 pNIC

This setup gives you redundancy for both storage and VM/SC networks, with the hardware you already have.

Thank you...I was thinking about this for the VMware networking part...that is why I'll try to make a diagram...

On the HP switch per se I'll VLAN the following:

1. The four ports used by iSCSI Controllers A and B, Ports 0 and 1 -- 1 VLAN on the switch

2. One VLAN per host for Storage VLAN 1's pnic

3. One VLAN per host for Storage VLAN 2's pnic

4. One VLAN per host for the ports used by the SC pnic

5. No VLAN for the 2 pNICs to which the VM Network NICs are connected...

This way it looks like I only have to buy 1 additional NIC per host for the SC traffic, just an HP NC110T. I'm being as cost-conscious as possible, to make it easier to have future requests granted. :)

Thank you...

P.S. I have not forgotten that points should be assigned, but there are not enough points to go around! :( Hopefully this thread will help other people.

P.P.S. I must remember to plan redundancy of the SC for HA to work properly... this may take some rethinking! And re-reading Texiwill's blog on what to do with 5 pNICs.

Texiwill
Leadership

Hello,

On the HP switch per se I'll VLAN the following:

1. The four ports used by the iSCSI Controllers A and B Port 0 and 1 -- 1 VLAN on the switch

2. One VLAN per host for Storage VLAN 1's pnic

3. One VLAN per host for Storage VLAN 2's pnic

4. One VLAN per host for the ports used by the SC pnic

5. No VLAN for the 2 pNICs to which the VM Network NICs are connected...

Yes this is the way to do it....

pNIC0: trunked, carrying the SC VLAN, VMotion VLAN, and iSCSI VLAN (only SC/VMotion on it by default); backup for portgroup2 (iSCSI).

pNIC1: trunked, carrying the SC VLAN, VMotion VLAN, and iSCSI VLAN (only iSCSI on it by default); backup for portgroup0 and portgroup1.

You really do not want your VMs sharing the other networks via VLANs; people do this, but it is not secure. With 4 pNICs and iSCSI, though, security suffers somewhere.
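One way to express that trunked layout in ESX 3.x service-console terms, with made-up VLAN IDs (10 = SC, 20 = VMotion, 30 = iSCSI); the physical switch ports feeding these pNICs would need to be 802.1Q trunks carrying the same VLANs:

```shell
# Hypothetical sketch; VLAN IDs and portgroup names are assumptions, not from the thread.
esxcfg-vswitch -a vSwitch1
esxcfg-vswitch -L vmnic0 vSwitch1                    # pNIC0
esxcfg-vswitch -L vmnic1 vSwitch1                    # pNIC1
esxcfg-vswitch -A "Service Console" vSwitch1
esxcfg-vswitch -v 10 -p "Service Console" vSwitch1   # tag SC traffic as VLAN 10
esxcfg-vswitch -A "VMotion" vSwitch1
esxcfg-vswitch -v 20 -p "VMotion" vSwitch1           # tag VMotion as VLAN 20
esxcfg-vswitch -A "iSCSI" vSwitch1
esxcfg-vswitch -v 30 -p "iSCSI" vSwitch1             # tag iSCSI as VLAN 30
```

The per-portgroup active/standby NIC order described above is set in the VI Client (portgroup > Edit > NIC Teaming), not via esxcfg-vswitch.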


Best regards,
Edward L. Haletky
tom12010
Enthusiast

Hello,

On the HP switch per se I'll VLAN the following:

1. The four ports used by the iSCSI Controllers A and B Port 0 and 1 -- 1 VLAN on the switch

2. One VLAN per host for Storage VLAN 1's pnic

3. One VLAN per host for Storage VLAN 2's pnic

4. One VLAN per host for the ports used by the SC pnic

5. No VLAN for the 2 pNICs to which the VM Network NICs are connected...

Yes this is the way to do it....

pNIC0: trunked, carrying the SC VLAN, VMotion VLAN, and iSCSI VLAN (only SC/VMotion on it by default); backup for portgroup2 (iSCSI).

pNIC1: trunked, carrying the SC VLAN, VMotion VLAN, and iSCSI VLAN (only iSCSI on it by default); backup for portgroup0 and portgroup1.

You really do not want your VMs sharing the other networks via VLANs; people do this, but it is not secure. With 4 pNICs and iSCSI, though, security suffers somewhere.

Thank you... My head is still spinning. :)

Now I have to learn about trunking too. :)

Thank goodness for Google. :)

If I have a 3rd separate switch (an 1800) just for the iSCSI (the SAN), I would not need a VLAN for the 4 iSCSI controller ports (Port 0 and 1 on each controller), but I would still have potential security issues from not having enough separation of network traffic, correct? Or could I put other pNICs into that 3rd switch as well?

I hate to buy a 3rd switch because, as soon as I virtualize at least 2 physical servers, I will no longer need the 2 current HP switches to be connected to each other for any reason.

I'm going to get at least 1-port NICs just for the SC, but I will see if I can get more 2-port NICs and use VLANs on our 2nd HP switch. I bought it knowing I would need more switch ports, but not knowing about the desirability of a separate switch for the iSCSI traffic, as recommended here...

Thank you, I'll try to post a diagram of some kind tomorrow...
