Jologs
Contributor

New EQ SAN & VMWare setup ?

I have a new EqualLogic PS160E. I want to put about 7 VMware instances on one physical Dell 2950 box. The physical box has 900GB RAID 5 for its DAS, 16GB RAM, two 3GHz CPUs, and 6 NICs (2 built-in & one 4-port PCI GB Ethernet). The 7 VMware instances are not very high-load servers (DNS, Domain Controllers, File & Print, etc...)

My question is, do I need to create seven different volumes in the PS SAN, one for each VMware instance, and load each VMware instance with the Microsoft iSCSI initiator?

For my separate SAN VLAN, do I put two IPs on each VMware instance - one IP to see the SAN subnet, while the other is for the production subnet so clients can connect to the VMware server instance?

Thanks, and any suggestions on the initial design are very much appreciated.

6 Replies
christianZ
Champion

For clarity-

you have one PS100E from Equallogic and one PE2950 with 900 GB internal disk capacity.

Is that correct ?

Jologs
Contributor

Yes, I have the EqualLogic PS100E, but they call it a 160E since only half the cage is populated, with 7 x 500GB SATA drives: 3.5TB raw, RAID 50 setting.

My physical Dell 2950 server has 900GB usable space. I plan on installing all seven of the VMware instances' (all Windows) C:\ OS drives on the physical server's DAS, while all D:\ data drives are on the EQ iSCSI volumes. Is this a good design? I'm not comfortable putting the OS files on the EQ SAN (boot to iSCSI SAN). I'm also a little fuzzy on how the IP assignment works between the physical NICs' IPs in ESX and the IP assignment of the virtual servers' NICs. I have the SAN VLAN all set up on my Cisco switch with jumbo frames enabled.

Thanks for the help.

msmenne17
Enthusiast

I'm not familiar with your exact setup, but here's how I would set it up:

Set up your SAN into 2 or 3 LARGE disk groups and create one LUN for each.

The SAN gets attached to the VMware HOST via iSCSI. The VMware HOST then formats the LUNs as VMFS volumes. The virtual disks (VMDKs) are stored on the VMFS volume. The guest operating systems ("instances") simply see local storage. The virtualization masks the fact that it's SAN-attached or anything else.
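On an ESX 3.x host, attaching the host to the SAN and formatting a LUN as VMFS from the service console would look roughly like the sketch below. The HBA name, volume label, and device path are placeholders for your environment, not values from this thread:

```shell
# Enable the ESX software iSCSI initiator (ESX 3.x service console)
esxcfg-swiscsi -e

# Rescan the software iSCSI HBA so the new EqualLogic LUN shows up
esxcfg-rescan vmhba40

# Format the LUN as a VMFS3 volume; the device path is a placeholder --
# list yours with: ls /vmfs/devices/disks/
vmkfstools -C vmfs3 -S eql-datastore1 /vmfs/devices/disks/vmhba40:0:0:1
```

After that, the datastore shows up in VirtualCenter and you place the VMDKs on it; the guests never see iSCSI at all.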

If you can, return the 900GB of DAS and get something smaller (2x146GB, for example). This is not boot from iSCSI SAN. This is iSCSI-attached SAN: the ESX boot is on local disk, with the SAN as storage. Yes, technically the guest VMs boot from there, but it's not the same.

This way, if you add a second VMWare HOST (ESX Server), you can use vMotion, DRS and HA. With VMs having disks on local storage, those options are not available.

christianZ
Champion

This configuration is possible and practical (you would use the MS iSCSI initiator in the Windows VMs), but in this case, when you get a second ESX server you won't be able to do VMotion and DRS.

You have enough NICs to create a separate vSwitch for iSCSI access (with e.g. 2 NICs and teaming/MAC address). Your VMs will then have 2 vNICs - the first for normal LAN access and the second for iSCSI.

The first vNIC will be connected to the first vSwitch (normal LAN access, with e.g. 2 NICs) and the second vNIC will be connected to the second vSwitch (only for iSCSI).
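A sketch of that second vSwitch on the ESX service console, assuming vmnic2/vmnic3 are the spare ports and "iSCSI-SAN" is a placeholder port group name:

```shell
# Create a dedicated vSwitch for iSCSI traffic
esxcfg-vswitch -a vSwitch1

# Uplink two of the spare physical NICs to it (teamed by ESX)
esxcfg-vswitch -L vmnic2 vSwitch1
esxcfg-vswitch -L vmnic3 vSwitch1

# Port group on the SAN VLAN; the VMs' second vNIC attaches here
esxcfg-vswitch -A "iSCSI-SAN" vSwitch1
```

Each Windows VM then gets its second vNIC placed on "iSCSI-SAN" with an IP in the SAN subnet, while the first vNIC stays on the production port group.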

The MS iSCSI initiator is faster than the ESX software initiator - that has been tested here often.
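Inside each Windows guest, pointing the MS iSCSI initiator at the EqualLogic group can be scripted with iscsicli; the group IP and target IQN below are made-up placeholders:

```shell
rem Register the EqualLogic group address as a target portal (placeholder IP)
iscsicli AddTargetPortal 10.0.1.10 3260

rem Log in to the volume's target so it appears as a local disk (placeholder IQN)
iscsicli QuickLoginTarget iqn.2001-05.com.equallogic:example-vm1-d-drive
```

The logged-in volume then shows up in Disk Management and can be brought online and formatted as the D: data drive.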

With this configuration you will be able to use the Windows VSS integration with the EQL (only for the D:, E:, ... disks) - that makes creating consistent snapshots possible.

Hope that's clear.

Jologs
Contributor

Thanks for the very helpful information, I will investigate & decide what options are best for the environment. Thanks again.....
