VMware Cloud Community
SBaldridge
Contributor

Configuration questions - Dell ESX Server

I am planning two new ESX 3.0.2 servers and hoped to get comments from the experts in this forum.

Proposed systems (two identical)

-Dell PE6950, 32 GB RAM, four 3.0 GHz dual-core processors, two 15k drives in RAID 1 for the ESX OS.

-Some ordinary Intel PRO gigabit NICs.

-Each server will have two QLA4052C iSCSI HBAs (on the VMware HCL).

-Servers will connect via iSCSI to our NetApp SAN, already in production with other ESX hosts.

Questions:

1. I thought we'd need two iSCSI HBAs (per server) to avoid a single point of failure. Thoughts?

2. I'm new to iSCSI HBAs. Will ESX see these as ordinary NICs under Configuration > Network Adapters, and can they be teamed for redundancy and performance as I do with my other ordinary gigabit network interfaces?

3. I intend to use both servers for VMotion and VMware HA. Thoughts or experiences with PE6950s?

Thanks in advance! I'll be at VMworld, so I hope to learn a lot there.

Cheers,

Scott

7 Replies
christianZ
Champion
Accepted Solution

We have PE6850s here with two QLA4050 iSCSI HBAs each. Remember, the only supported iSCSI HBA is the QLA405x series, and it is PCI-X only.

ESX sees these as SCSI controllers (under Storage Adapters, not Network Adapters), and correct, you need two for redundancy (a maximum of two is supported).

Max. two means 2 x QLA4050 (single-port) or 1 x QLA4052 (dual-port)!
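
If it helps, here's a quick way to confirm from the service console that ESX picked up both cards and has redundant paths to your LUNs (a rough sketch; the exact output depends on your setup):

esxcfg-vmhbadevs   # map each visible LUN (vmhbaX:Y:Z) to its service console device
esxcfg-mpath -l    # list every path to every LUN; with both HBAs cabled you should see two paths per LUN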

SBaldridge
Contributor

Now I'm very glad I posted. I might have purchased the QLA4052s, which are dual-port cards. To get two physically separate HBAs I need two QLA4050s, as you are using... OK, great.

I wonder if I could ask a favor. I see on Dell's site that there are two QLA4050 HBAs:

QLA4050C-CK SANBlade PCI-X to iSCSI, Dell part A0631542

QLA4050C-E-SP SANBlade PCI-X to iSCSI, Dell part A0723517

I assume you are using the first one above?

glynnd1
Expert

Scott,

I had the same question on the two QLA4050 HBAs. As far as I could find out, they are the same, except that the -E-SP ships with a particular firmware and BIOS level that is EMC-certified.

As for your server configuration, I would suggest leaving yourself the option of upgrading to 64 GB of memory, i.e., get 8 x 4 GB DIMMs rather than 16 x 2 GB. Both give you 32 GB now, but the 4 GB DIMMs leave half the slots free for a later upgrade. Yes, this will cost you an extra ~$2k per server.

christianZ
Champion

From what I've heard, the -E-SP model is cheaper than the other, although it's in fact the same hardware (a difference of ~$200). I would get the -E-SP, then.

grasshopper
Virtuoso

Also consider that quad-core processors for four-socket servers are coming out soon. That would double your logical processors for the same VMware licensing cost (licensing is per socket, not per core). You can also swap the processors out later (dual-core to quad-core) at a cost of ~US$5k.

Since with VMware we like to give each logical processor 4 GB of RAM, you would then want that 64 GB: four sockets x four cores = 16 logical processors, and 16 x 4 GB = 64 GB (~US$5k for the extra 32 GB). So follow David's advice and get the 4 GB DIMMs, even if you only start with 32 GB.

As for NICs, I would personally go with dual-port cards instead of quad-port. Also, you'll probably want a DRAC card for out-of-band management.

I could also comment on the wicked new virtualization server Dell is preparing to release for beta, but I'm not sure I'm allowed to disclose it (oops, I think I just did). Ask your Dell rep if you're interested.

Don't forget VirtualCenter (VCMS) licensing plus a SQL Server license. You can run VirtualCenter and the SQL server in a couple of VMs and save the hardware purchase.

Anyway... best of luck, sounds like a nice setup so far.

SBaldridge
Contributor

Thanks for the advice. The DRAC would be cool. We use an IP KVM, so I already get a lot of those features, like remote console.

I talked to our Dell rep, and he got me talking to a Dell engineer who told me what should have been obvious to me: the PE6950 can't take a PCI-X card (its slots are PCI Express), so the QLogic QLA405x is useless in a 6950. He suggested the 6950 would still be a good call for this ESX project, but to buy the 4 GB DIMMs (sound like familiar advice?). I've done that on my preliminary order, which hasn't been placed yet. I hadn't heard of the new virtualization server... sounds sexy. Got any timeline on a release date?

Does anyone have a comment on running the PE6950 without an iSCSI HBA, just using the ESX software iSCSI initiator over two teamed gigabit NICs instead? We have a gigabit backbone here, including to the SAN. I won't be booting ESX from the SAN.
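
From what I've read, the software route would look roughly like this from the service console (untested on my end; the target IP is a placeholder for our NetApp's portal, and it assumes a VMkernel port for iSCSI already exists):

esxcfg-firewall -e swISCSIClient        # open the service console firewall for the software iSCSI client
esxcfg-swiscsi -e                       # enable the software initiator (appears as vmhba40 on 3.0.x)
vmkiscsi-tool -D -a 10.0.0.10 vmhba40   # add the filer as a SendTargets discovery address (placeholder IP)
esxcfg-rescan vmhba40                   # rescan so the new LUNs show up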

grasshopper
Virtuoso

I don't have a date on the release... beta starts soon though.

I use the 'Intel PRO/1000 PT Cu, Dual Port, PCIe NIC (430-0959)' for my iSCSI connections to our NetApp 3020... I was blown away by the performance. No complaints.
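
The teaming side is just standard vSwitch setup, roughly like this (a sketch; the vSwitch, port group, vmnic names, and IP are placeholders for your environment):

esxcfg-vswitch -a vSwitch2          # create a vSwitch for iSCSI traffic
esxcfg-vswitch -L vmnic2 vSwitch2   # link both gig NICs as uplinks for redundancy
esxcfg-vswitch -L vmnic3 vSwitch2
esxcfg-vswitch -A iSCSI vSwitch2    # add a port group for the VMkernel interface
esxcfg-vmknic -a -i 10.0.0.21 -n 255.255.255.0 iSCSI   # VMkernel IP on the iSCSI network (placeholder)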