VMware Cloud Community
zenomorph
Contributor

VMware vSphere iSCSI design

We're trying to design our VMware vSphere solution and are proposing an iSCSI SAN for storage due to cost concerns; ideally we'd like an FC SAN but cannot justify the cost, so the next best solution is iSCSI.

I've been looking at the EMC AX4 iSCSI, HP LeftHand, and MSA2000i solutions. One query or concern I have looking at some of these iSCSI arrays is the host-connect speed from the storage. The EMC AX4 has 2 x 1GbE NICs, the HP MSA2000i also has 2 x 1GbE NICs, while the HP LeftHand solution has 2 x 1GbE and can be upgraded to 10GbE NICs.

What concerns me is this: presuming we're going to run a lot of VMs on the SAN, will a 1GbE NIC be adequate to support the throughput the VMs require? The newer 10GbE NICs, now supported with vSphere, would obviously be the better option, but taking into account the 10GbE switch requirements and server NICs, that will obviously bump up the price.
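For a rough sense of scale, here is a hedged back-of-envelope sketch; the per-VM IOPS and IO-size figures below are assumptions for illustration, not measurements from any of these arrays.

```python
# Back-of-envelope: aggregate VM throughput vs. one 1GbE iSCSI path.
# The per-VM IOPS and IO size are illustrative assumptions only --
# measure real workloads (esxtop, vscsiStats) before committing to a design.

GBE_USABLE_MB_S = 110        # assumed usable MB/s on a single 1GbE link
TEN_GBE_USABLE_MB_S = 1100   # same assumption scaled up for 10GbE

def aggregate_mb_s(num_vms, iops_per_vm=50, io_size_kb=16):
    """Rough steady-state storage throughput for num_vms 'average' VMs."""
    return num_vms * iops_per_vm * io_size_kb / 1024.0

for vms in (10, 15, 30):
    demand = aggregate_mb_s(vms)
    print(f"{vms:3d} VMs -> ~{demand:5.1f} MB/s "
          f"({demand / GBE_USABLE_MB_S:.0%} of 1GbE, "
          f"{demand / TEN_GBE_USABLE_MB_S:.0%} of 10GbE)")
```

The steady-state numbers for 10-15 modest VMs rarely saturate a single 1GbE path; it's the bursts (backups, Storage vMotion, database batch jobs) that do, which is why multiple 1GbE paths with multipathing, or 10GbE, are worth budgeting for.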

Can anyone with iSCSI and FC SAN experience provide some suggestions on throughput requirements?

Many thanks

15 Replies
ConstantinV
Hot Shot

It depends... How many VMs will you be running, at least for now?

StarWind Software Developer

VCP 4/5, VCAP-DCD 5, VCAP-DCA 5, VCAP-CIA 5, vExpert 2012, 2013, 2014, 2015
zenomorph
Contributor

Our plan is to use either the AX4, MSA2000i, or HP LeftHand P4000. Presuming each SAN enclosure has 12 x 136GB disks in RAID 5, that would be about 1TB per enclosure, and if we have two enclosures for starters that would be about 2TB, with maybe 10-15 VMs.
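As a sanity check on the ~1TB-per-enclosure figure, a minimal capacity sketch, assuming one parity disk's worth of overhead and one hot spare per 12-disk shelf; the real layout, vault/system reserved space, and spare count depend on the array.

```python
# Usable-capacity sketch: 12 x 136GB drives per shelf in RAID 5.
# One hot spare per shelf is an assumption; vault/system space is ignored.

DISK_GB = 136
DISKS_PER_SHELF = 12

def raid5_usable_gb(disks, disk_gb, hot_spares=1):
    data_disks = disks - hot_spares - 1   # one disk's worth goes to parity
    return data_disks * disk_gb

per_shelf = raid5_usable_gb(DISKS_PER_SHELF, DISK_GB)
print(f"One shelf  : ~{per_shelf} GB usable")                  # ~1360 GB
print(f"Two shelves: ~{2 * per_shelf / 1024:.1f} TB usable")   # ~2.7 TB
```

So roughly 1-1.3TB usable per shelf is in the right ballpark once the array's reserved space is subtracted.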

What I'm trying to get at is: has anyone experienced performance or throughput issues with iSCSI because of the 1GbE NIC limit on the SAN enclosure?

Many thanks

Josh26
Virtuoso

The exact nature of what you're doing with the LeftHands will determine throughput a lot too.

Synchronous mirroring of two iSCSI SANs (a LeftHand-recommended install) is going to be slower than a traditional single SAN, since every write has to be mirrored. The quality of your switch is also likely to have a huge impact on throughput.
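To make the mirroring cost concrete, a toy model, assuming two-way synchronous replication whose traffic shares the same 1GbE fabric; both numbers are assumptions, and real LeftHand Network RAID behaviour also adds latency per write.

```python
# Toy model: read vs. write ceilings with 2-way synchronous mirroring
# when replication shares the same 1GbE links. Illustrative only.

LINK_MB_S = 110     # assumed usable MB/s per 1GbE path
COPIES = 2          # every write must be committed on both nodes

read_ceiling = LINK_MB_S
write_ceiling = LINK_MB_S / COPIES

print(f"Read ceiling : ~{read_ceiling:.0f} MB/s")
print(f"Write ceiling: ~{write_ceiling:.0f} MB/s (each write crosses the wire twice)")
```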

zenomorph
Contributor

Hi Josh,

I'm fairly new to iSCSI SANs, so I'm not sure about the LeftHand mirroring recommendation - is there a link you can provide with details on the mirroring requirement? I assumed we could just use it as a single-SAN design.

If we're going to use LeftHand then we'd most likely go for the 10GbE option, and switch-wise our standard is Cisco, but we still haven't decided. At this point we're still trying to decide/justify whether to go AX4 FC or iSCSI versus HP MSA2000i or LeftHand P4500.

The only reason I can see us choosing the LeftHand solution is its higher 10GbE NIC throughput, which at this stage the EMC AX4 doesn't offer yet - but we can't decide how much difference that 10GbE will actually make.

Many thanks

Josh26
Virtuoso

Hi,

I don't have any specific links, but here is the way to think of the LeftHand solution: it's a software solution. You can see this in the fact that you can just as easily buy the LeftHand VSA - a VM appliance that turns your standard hardware into a SAN. The LeftHand SAN is just a DL180 with that software loaded.

The main reason you would go that route over a dedicated device is that the software has some very good options, including mirroring. Any salesman will sell you on that and try hard to get you buying two. It does have a huge benefit - there's nothing else in this space that lets you mirror your SAN to another box on the other side of the building and achieve live failover.

You CAN of course buy a single LeftHand, but by the time you've paid for it, it's worth weighing what you've gained over a low-end server running Openfiler.

TimPhillips
Enthusiast

In this case Josh got it right: LeftHand is just software, and it costs a lot of money. I can't say that Openfiler is the best solution either: it has poor documentation, even the manual costs 40 euros, and don't get me started on support - 4,200 per cluster! And that's in euros, per year! For that money you can buy an Enterprise Server license for StarWind iSCSI and about six years of support (I used StarWind as an example because, of the top three - StarWind, Open-E, and DataCore - only they publish prices on their site).

azn2kew
Champion

You can't go wrong with any of the solutions you're evaluating; it's just a matter of price and your comfort level with the implementation. Are you going to use hardware or software iSCSI? You get more performance with dedicated hardware initiators, but they cost more, and the VMware software iSCSI initiator should suffice for most environments. If this is a new design, I would look at the long-term strategy for scalability and push for 10GbE if possible, so you have more headroom and less disruption when you scale up. There are 10GbE cards, such as Neterion's, that can control bandwidth throughput nicely. There are too many iSCSI solutions to list, but comparing the three, EMC should give you the most flexibility between iSCSI and FC, though it's pricey. If you're considering LeftHand, then you can check out SANmelody too, or other free or cheap tools such as StarWind, StorMagic, Openfiler, and FreeNAS.

If you found this information useful, please consider awarding points for "Correct" or "Helpful". Thanks!!!

Regards,

Stefan Nguyen

VMware vExpert 2009

iGeek Systems Inc.

VMware, Citrix, Microsoft Consultant

TimPhillips
Enthusiast
(Accepted solution)

Can't say that StarWind is bad or poor software; I've been using it for long enough.

zenomorph
Contributor

Thanks guys for your great help. Maybe I'll do a bit more research; given our current budget I think we'll probably go for the MSA2000FC model, but I'm just not sure whether we should use RAID 5 or RAID 10, given we'll have a mix of databases and other non-IO-intensive utility VMs running.
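On the RAID 5 vs RAID 10 question, the usual rule of thumb is the write penalty: a random write costs roughly 4 back-end IOs on RAID 5 and 2 on RAID 10. A hedged sketch, where the per-spindle IOPS and the 70/30 read/write mix are assumptions:

```python
# Rough front-end IOPS for a 12-disk group under different RAID write penalties.
# 175 IOPS per 15K spindle and a 70/30 read/write mix are assumptions.

SPINDLE_IOPS = 175
DISKS = 12
READ_RATIO, WRITE_RATIO = 0.7, 0.3

def frontend_iops(write_penalty):
    backend = DISKS * SPINDLE_IOPS
    # Each front-end write consumes write_penalty back-end IOs.
    return backend / (READ_RATIO + WRITE_RATIO * write_penalty)

print(f"RAID 5  (penalty 4): ~{frontend_iops(4):.0f} IOPS, ~11 disks of capacity")
print(f"RAID 10 (penalty 2): ~{frontend_iops(2):.0f} IOPS, ~6 disks of capacity")
```

In practice that usually points to RAID 10 for the write-heavy database LUNs and RAID 5 for the capacity-hungry, low-IO utility VMs.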

Cheers

whuber97
Enthusiast

zenomorph,

I can tell you that the AX4-5i is a great product. I've installed 5 of them in the past 5 months for small vSphere environments (2 to 4 hosts) and they've been great. It sounds like a great fit for your environment.

Just to clarify on the number of ports... the dual-controller models of both the AX4-5 AND the MSA 2012i (I installed one of these in January, and I prefer the AX4) have 4 x GbE ports, not 2. Each controller has 2 ports. Using vSphere and the software initiator you can drive IO through all four GbE ports, provided that you have at least two LUNs (one assigned to each SP/controller) and are using an NMP round-robin multipathing configuration.
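To make the round-robin part concrete, here is a minimal sketch of how it might be scripted against the vSphere 4.x-era esxcli. The naa-prefix filter and the idea of applying VMW_PSP_RR to every listed device are assumptions; confirm the device list, the array vendor's recommended policy, and the exact esxcli syntax for your build before running anything like this.

```python
"""Sketch: switch NMP devices to round robin via the vSphere 4.x esxcli.

Assumptions: 'esxcli' is on the PATH (ESX console, or adapt for remote vCLI),
devices of interest appear as lines starting with 'naa.', and VMW_PSP_RR is
appropriate for the array -- verify all three before use.
"""
import subprocess

def esxcli(*args):
    return subprocess.check_output(("esxcli",) + args, universal_newlines=True)

# Pull the NMP device list and keep the device identifiers.
devices = [line.strip()
           for line in esxcli("nmp", "device", "list").splitlines()
           if line.strip().startswith("naa.")]

for dev in devices:
    print("Setting VMW_PSP_RR on", dev)
    esxcli("nmp", "device", "setpolicy", "--device", dev, "--psp", "VMW_PSP_RR")
```

With one LUN owned by each SP and round robin enabled on both, IO spreads across all four GbE ports instead of sitting on a single path.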

If you have other questions surrounding the AX4 or the 2012i, let me know.

Will

vExpert 2012, 2013 | VCDX #81 | @huberw
zenomorph
Contributor

Will,

Thanks for the response. Actually, after some consideration we're tending towards an FC solution, so I'm trying to get some more info on the AX4-5 and MSA2012FC. How do these two compare against each other, and how well do they perform versus the CX3-80? We know the CX3-80 is another class up, but how do they compare?

Many thanks

whuber97
Enthusiast

zenomorph,

I have also installed solutions for environments your size using the FC AX4-5, paired with two Brocade 300 switches and dual 4Gb fiber HBAs in each ESX host, and have had great results.

For an environment your size, I think the CX3-80 is a bit much. If you wanted to step into the CX series, I would recommend a CX3-10 or 20. Using a 10 or 20 gives you the flexibility of using EITHER 4Gb FC or iSCSI. The CX3-10 has 4x 4Gb fiber ports and 4x 1Gbe iSCSI ports, and the CX3-20 has 4x 4Gb fiber ports and 8x 1Gbe iSCSI ports.

It sounds like the AX4-5 would probably work best for you in terms of capabilities and budget - it will scale up to 60 drives (12 drives in the DPE, and an additional four DAEs carrying 12 drives each can be strapped on). BUT, if you want to take that next step into the CX series to get the additional flexibility of running multi-protocol (both FC and iSCSI), the CX3-10 or 20 would be your next choice. The CX3-10 takes the same number of drives as the AX4-5, while the CX3-20 takes up to 120 drives and also supports both iSCSI and FC. I believe the CX3-80 is a bit much for what you need.

From the sounds of it, if it were me, and I were only running 10 to 12 VMs in this environment and didn't plan on scaling from the base 12 drives to more than 60 drives in the next 4 or 5 years, I would go with the AX4 (either iSCSI or FC) - both sound like they would be appropriate for you. If you do decide to go with iSCSI, don't overlook the importance of quality dedicated network switches and quality NICs in your servers for the iSCSI transport.

Hope that helps, if you have any other questions let me know.

Will

vExpert 2012, 2013 | VCDX #81 | @huberw
zenomorph
Contributor

Will,

Thanks for your great help. For our implementation we're initially thinking of running around 28 VMs across 3 ESX Enterprise hosts attached to the AX4-5, requiring about 9TB of storage on 300GB 15K SAS. With RAID 5 that would mean about 36 disks, which leaves only about 24 disk slots remaining against the AX4's 60-disk limit.

But this doesn't take into account the space we'd need for VCB snapshots. Initially we were thinking of running normal differential Windows backups inside the VMs on weekdays to reduce the number of tapes we need, then doing the VMDK backups with VCB at the weekend. For that we'd need some space - we're guessing around 3-4TB - which we may run on RAID 10 SATA or stick with RAID 5 on 15 x 300GB SAS. But presuming we take this approach, we really don't have much room left for expansion.
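A quick sketch of the disk-count math behind those figures, assuming 8+1 RAID 5 groups of 300GB drives for production and keeping the ~4TB backup staging on the same tier; the group size, spare handling, and backup figure are all assumptions.

```python
# Disk-count sketch against the AX4-5's 60-drive ceiling.
# 8+1 RAID 5 groups, 300GB drives, and a ~4TB backup tier are assumptions.

import math

DRIVE_LIMIT = 60
DRIVE_GB = 300
GROUP_SIZE = 9                                   # 8 data + 1 parity
USABLE_PER_GROUP_GB = (GROUP_SIZE - 1) * DRIVE_GB

def disks_for(target_tb):
    groups = math.ceil(target_tb * 1024 / USABLE_PER_GROUP_GB)
    return groups * GROUP_SIZE

prod = disks_for(9)       # ~9TB of production VMFS -> 36 disks
backup = disks_for(4)     # ~4TB of VCB staging on the same tier -> 18 disks

print(f"Production : {prod} disks")
print(f"Backup tier: {backup} disks")
print(f"Left over  : {DRIVE_LIMIT - prod - backup} of {DRIVE_LIMIT} slots")
```

Swapping the backup tier to a handful of large SATA drives, as suggested in the next reply, frees most of those slots back up.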

Really, our main concern is the performance of the SAN - we'll be running a combination of SQL, Notes, and IIS servers on the AX4.

Cheers....

whuber97
Enthusiast

Zenomorph,

For the VCB snapshots, you should consider installing high-capacity SATA disks. The AX4 allows you to mix and match SAS and SATA disks in the same enclosure, so you could easily accomplish this. Use the 300GB 15K SAS disks for your production VMs, and higher-capacity, slower SATA disks (say 1TB) for the backup LUNs. That would give you plenty of storage and still leave room for growth.

If that is not an option, it sounds like the CX3, which allows for 120 drives, is your next step.

Hope that helps!

Will

vExpert 2012, 2013 | VCDX #81 | @huberw
HyperViZor
Enthusiast

Hi zenomorph,

I have an AX4-5i running with vSphere 4.0 perfectly well. However, I can't provide any feedback on performance, since it's running only in my lab and I have no production workloads on it. I'm happy with this array and think it's great; however, I need to draw your attention to one thing: this iSCSI array doesn't support VMware SRM, so if you're planning to implement SRM in your environment down the line, you must take this limitation into account.

Good luck

Hany Michael

HyperViZor.com | The Deep Core of The Phenomena
