VMware Cloud Community
Bill_Morton
Contributor

Critique my VMware plan:

I have spent most of the past month trawling these forums, working with VARs and vendors, and watching webinars to come up with a virtualization plan. I would like a bit of feedback to make sure nothing is missing that would cause problems later. I also have a year of experience with VMware Server, so I understand the basic concepts.

Current setup: 35 generic servers running on local RAID5 storage.

VM setup: convert 10 existing physical servers to VMs as a proof of concept before going fully virtualized (read: small budget to start). Also, I am in higher education, so Microsoft licensing and VMware software costs are really not an issue. The servers scheduled to be converted have minimal CPU usage (less than 5% average), low disk I/O, and moderate network usage. Basically, they are simple application servers, although I would like to move at least one database server into a VM in the future. I see this data center 95% virtualized within 2 years once everyone is convinced of how well it works. The solution I am planning needs to be very scalable.

The plan:

Storage: EqualLogic PS100E 3.5TB SATA2 iSCSI (if I can get a budget bump I'll try to get the PS3800, which is 2.3TB of 15k SAS).

Logic: I have done a lot of research on iSCSI storage, and I am confident it will support our needs. The only question is whether I can get the SAS array now, or upgrade to it later and use the SATA array as a replication partner. It will also scale well in both size and performance. EqualLogic also has great snapshot features for DR.

Servers:

2x HP DL385 G2

w/ 2x 2.6GHz dual-core AMD Opteron 2218

12GB RAM

2x 36GB 10k SAS disks in RAID 1 for local storage / boot

2x QLogic iSCSI adapters

Software: VMware VI3 Enterprise.

Network: Use the existing Cisco Catalyst 6500 and create 3 new VLANs:

VLAN 1: 10 ports for iSCSI (6 for the PS array, plus 2 per server for the QLogic cards). Jumbo frames, STP disabled, flow control enabled, unicast storm control disabled (see the config sketch after this list).

VLAN 2: 2 ports for VMotion. I know I could connect the servers directly, but I am trying to plan for zero-downtime future scalability (though it is not strictly necessary).

VLAN 3: 3 ports for console / management (1 per ESX server, and 1 for the server running VirtualCenter).

Native VLAN: 4 ports (2 per server) for connection to the rest of the network / internet.
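For illustration, here is a rough IOS-style sketch of the iSCSI VLAN settings. This is only a sketch: it assumes the 6500 runs native IOS, and the VLAN ID (100) and slot/port range are placeholders, not the real assignments:

! placeholders: VLAN 100, ports Gi3/1 - 3/10
vlan 100
 name iSCSI
!
interface range GigabitEthernet3/1 - 10
 description iSCSI (6 ports for the PS array, 2 per host for the QLogic HBAs)
 switchport mode access
 switchport access vlan 100
 ! jumbo frames are set per port on this platform
 mtu 9216
 ! honor pause frames from the array and HBAs
 flowcontrol receive on
 ! edge ports: skip the STP listening/learning delay
 spanning-tree portfast
!
! the "STP disabled" item above; portfast alone is the more cautious option
no spanning-tree vlan 100
! unicast storm control is simply left unconfigured (off by default in IOS)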

I am also planning on getting VMware certified to make sure I understand the finer points of virtual switches and the like.

Specific questions:

I am planning on using VLANs and new subnets for the VM-specific traffic and letting the Catalyst 6500 do the Layer 3 inter-VLAN routing. Any disadvantage to this, or a better way it should be done?
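For what it's worth, the routing side of that would just be SVIs on the 6500. A minimal sketch, assuming native IOS, with made-up VLAN IDs and addresses; note the iSCSI VLAN deliberately gets no SVI here, so storage traffic stays layer 2 only:

ip routing
!
interface Vlan3
 description Console / Management subnet (placeholder addressing)
 ip address 192.168.3.1 255.255.255.0
!
! no "interface Vlan100" for iSCSI -- it stays unrouted unless it needs to be reached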

I have considered splitting the ports between two different network planes on the Catalyst switch, but this seems unnecessary and would require moving a bunch of existing connections. If the switch I am using fails, everything goes down anyway, so there doesn't seem to be much value in adding redundancy at this point. Bad idea? If needed, servers can go offline for several hours at night for maintenance / upgrade / replacement.

For disaster recovery: there is currently no plan. As noted below, a replication site is in the works, but to start I am planning to schedule snapshots and back them up to disk (no tape in my environment). Acceptable data loss is 24 hours, with downtime of a few days.

Future plans:

Plan to add a PS array 45 miles off-site to replicate for DR. It will have a 20-45Mbps link.

Considering:

Using BackupExec with VSS control to do backups to disk.

Any other ideas / suggestions?

Thanks!

acr
Champion

Storage: EqualLogic PS100E 3.5TB SATA2 iSCSI (if I can get a budget bump I'll try to get the PS3800, which is 2.3TB of 15k SAS).

Their SATA arrays are very quick, and either way you can definitely replicate between the two. Works great.

Network: Use the existing Cisco Catalyst 6500 and create 3 new VLANs:

OK, I may be reading this wrong. When you say you're using the existing Cisco switch..?

I'd recommend separate Cisco switches for the iSCSI traffic. That way it gets its own dedicated backbone!

oreeh
Immortal

As acr already mentioned - split your storage network from your normal network.

One point is the dedicated backbone, the other is security.

JDLangdon
Expert

Servers:

2x HP DL385 G2

w/ 2x 2.6GHz dual-core AMD Opteron 2218

12GB RAM

2x 36GB 10k SAS disks in RAID 1 for local storage / boot

2x QLogic iSCSI adapters

With the cost of hard disks these days, I'd throw in the 3rd disk and configure it as a hot spare.

Jason

Bill_Morton
Contributor

I was under the impression that a separate VLAN would give me the security I need, and for bandwidth, we are currently nowhere near the 65Gb/s that the backplane of the Cisco switch can handle. Also, the VLAN will create a new broadcast domain, so my thinking is that I gain more from using the existing switch, with its redundant supervisors and power supplies, than from a new switch.

In the future I could see adding a second switch to do the primary iSCSI traffic and using the VLAN group on the existing Cisco as the redundant link.

Bill_Morton
Contributor

That's a great idea!

Bill_Morton
Contributor

To add a bit more info, I just pulled the SNMP counters from the switch:

SysTraffic: SNMPv2-SMI::enterprises.9.5.1.1.8.0 = INTEGER: 0

sysPeakTraffic: SNMPv2-SMI::enterprises.9.5.1.1.19.0 = INTEGER: 13

as per: http://www.cisco.com/warp/public/477/SNMP/cat-switch_backplane_util.html
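For anyone who wants to repeat this, the Net-SNMP queries behind those values look roughly like this (the hostname and community string are placeholders):

snmpget -v2c -c public cat6500 1.3.6.1.4.1.9.5.1.1.8.0
snmpget -v2c -c public cat6500 1.3.6.1.4.1.9.5.1.1.19.0

Per the linked Cisco doc, both values are percentages of backplane bandwidth: the first is the current utilization, the second the peak.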

oreeh
Immortal

Handle iSCSI like you would handle FC.

Nobody tries to connect an FC card to an Ethernet fiber switch, even if the cables fit in the plugs. :smileygrin:

Bill_Morton
Contributor

Even so, there is going to be a bridge somewhere, even if only for management. By the numbers, I am still not convinced that there is any overwhelming reason to try to separate everything. Remember that this is, in part, still a proof of concept for my organization.

I agree 100% that best practices would dictate a completely separate, redundant, multi-path backbone for the iSCSI, and I will make sure it is in my proposal should [read: when] our VM environment go into full production, but for the time being it is beyond my budget.

oreeh
Immortal

As long as it's a proof of concept, that's OK.

Remember - proofs of concept grow into production faster than you think, and then it's much harder to separate things.

Regarding management - with iSCSI I tend to use a separate management LAN.

Bill_Morton
Contributor

Just to clarify (in case I am thinking about this incorrectly): you eventually want your iSCSI network to reach network / internet resources so that it can send out alerts, call home to the vendor (for replacement parts), or be managed remotely. Plus, if we happened to want on-site replication in a different building, we would then only have to configure the proper VLAN, as opposed to running new dedicated wire and switches.

FOEDUS_-_Tyler
Contributor

Very comprehensive plan indeed. One question: you mentioned that you have 35 physical boxes and plan to start by virtualizing 10 of them. The good news is that, depending on your present utilization, I bet you can provision 3 or 4 ESX hosts and stand up 3-5 virtual machines per core. My guess is that you could consolidate all 35 servers onto roughly 3-5 ESX hosts! Set aside one server for VirtualCenter, and voila! Using DRS and VMotion, let VMware allocate the resources; it might even save you some budget money to fund your DR plan!

Unless you are pounding a heavy read/write SQL database, the I/O of the EqualLogic (SATA) solution should be perfect.

A bit biased, but we wrote the whitepaper on iSCSI best practices within VI3...it could be a decent read prior to launching your project.

Are you planning on using VCB?

Also, have you checked out Vizioncore? It is almost a must-have in any virtualized environment (and cheap!).

Overall, very well thought out...however, you may be able to leverage the virtual networking to a greater extent.

Ping me offline if you need any help.

Tyler

dawho9
Enthusiast (Accepted Solution)

I was with you on this one, but we ultimately decided to separate it onto a different switch because:

1. Security. A VLAN can be hacked, sniffed, etc. A separate switch makes it more difficult to get at.

2. We are a school also, so it wasn't difficult to make the case, as it didn't cost an arm and a leg (and you saved all that money anyway).

3. Performance. We already have some other large databases and our financial system on the switch, so we didn't want to worry about running into a problem down the road.

Also, as was mentioned before, a proof of concept is really sweet, but they become production overnight. If there is one "bad" thing about VMware, it is that once the "people upstairs" know you have it and can provision a server in seconds, you end up with servers for everything overnight. This is what happened to us with our first ESX servers, and breaking it apart later wasn't much fun. I don't really like working weekends and such. :smileyhappy:

Richard

AustinPowers
Enthusiast

EqualLogic is an excellent iSCSI box. We use the PS300E SATA and performance is great. I don't know if the SAS box co$t will be justified unless you are in a huge DB or transactional environment.

We had very conservative consolidation projections at deployment and have been pleased with the utilization numbers. We are comfortably running HA and DRS on 3 two-CPU dual-core hosts with 9 VMs. Three of the VMs are SQL boxes that get pounded on pretty hard by 100-175 users per day. Most performance numbers are consistently below 50%, so we feel we could easily add more VMs if/when needed.

Good luck keeping the students from hacking into secret places. :smileyhappy:

Bill_Morton
Contributor

Wanted to get a few more responses before closing.

Bill_Morton
Contributor

1. For the time being, the VLAN I create will not be accessible on any switch outside the server room, which has good physical security.

As part of the VM initiative, I am writing new policies defining a specific approval process for any new virtual server so that we don't end up with virtualization bloat.

Bill_Morton
Contributor

Tyler -

I would be interested in looking at your white paper on iSCSI. I have read EqualLogic's guide, but it is always good to have multiple perspectives.

As for VCB, I am hoping to use it, but I have not had a chance to drive it hands-on yet. Since it is my name on the line and I am pushing the technology, I want a rock-solid DR plan to back it up.

I have not come across Vizioncore, but I will check it out.

Bill_Morton
Contributor

AustinPowers-

I have purposefully gone for a conservative consolidation number so that we can take advantage of HA should a box fail.

Luckily our students (medical) are usually too busy to cause problems :smileywink: but yes, security is my next initiative after the virtualization is implemented.

Bill_Morton
Contributor

I just want to confront, head on, a lingering question that has been skirted a few times.

Do you eventually have a connection from your iSCSI network into the data network so that you can get alerts and call-home messages, or do most installations have a computer acting as a head node, with one connection to the iSCSI network and one to the data network?

If it is just an issue of physical separation to ensure no mistakes are made, I can understand that. With all of this virtualization planning, I have (somewhat successfully) switched to the mindset of carving virtual resources out of pools of physical resources, which leads naturally to the idea of using a VLAN for the iSCSI.
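If the iSCSI VLAN does eventually get an SVI for exactly that alerting / call-home traffic, one way to keep it effectively one-way is an inbound ACL on the SVI. A hypothetical sketch only: the subnet, VLAN ID, and permitted services are all assumptions (SNMP traps plus SMTP, assuming the array alerts via traps and e-mail):

ip access-list extended ISCSI-EGRESS
 ! let the array send traps and alert e-mail out, and nothing else
 permit udp 192.168.100.0 0.0.0.255 any eq snmptrap
 permit tcp 192.168.100.0 0.0.0.255 any eq smtp
 deny ip any any log
!
interface Vlan100
 description iSCSI (routed only for alerting / call-home)
 ip address 192.168.100.1 255.255.255.0
 ip access-group ISCSI-EGRESS in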

happyhammer
Hot Shot

I would suggest that you check/test that you can apply jumbo frames to your specific iSCSI storage VLAN without affecting the MTUs on the other VLANs and L3 interfaces, if you have them.
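A quick way to sanity-check that, assuming native IOS (the interface names are examples): jumbo frames are set per port on this platform, so after raising the MTU on the iSCSI ports only, the other ports and the L3 interfaces should still report the default 1500.

show interfaces GigabitEthernet3/1 | include MTU
show interfaces Vlan1 | include MTU

Expect 9216 on the first and 1500 on the second.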
