pcable
Contributor

vSphere ESX/vCenter Implementation

This is my first vSphere implementation. Hooray for research!

I'm wondering if anyone has notes/gotchas on what I've put together. At this point the hardware already exists, so there's not much I can do about that. The purpose of the vSphere cluster I'm setting up is basically to host infrastructure services for a network of about 125 machines. This includes things like naming services (LDAP and DNS), license servers, a Samba server, our kickstart infrastructure, and our Subversion repository, among other things that are important to our day-to-day work. No "tier 1" applications like SAP or anything like that, just small stuff.

For hardware I have:

- 4 Sun Fire X4170 servers (2 quad-core Intel Xeon 55xx-series processors, 32 GB of RAM, 2 iSCSI HBAs, and 8 NICs each)

- 2 EMC CLARiiON CX3-10c arrays, with 5 TB of raw space and MirrorView/S.

The plan is to put 2 X4170s and 1 array in one datacenter, and the other 2 hosts/array in the other.

I currently have 6 LUNs per array; for simplicity I'll just go over the first datacenter's configuration (the corresponding LUNs are mirrored on the other DC's array as well). A rough capacity sanity check follows the list.

- host1-dc1 boot (25 GB)

- host2-dc1 boot (25 GB)

- host3-dc2 boot mirror (25 GB)

- host4-dc2 boot mirror (25 GB)

- vmpool-dc1 (1325 GB)

- vmpool-dc2 mirror (1325 GB)
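
As a rough sanity check that these LUNs fit in the usable space left after RAID overhead, something like the sketch below works. The disk count, disk size, and RAID level in it are illustrative assumptions, not the actual CX3-10c configuration.

```python
# Back-of-envelope capacity check for one array's LUN layout.
# Disk size/count and RAID level are assumptions for illustration,
# not the actual CLARiiON configuration.

def raid5_usable_gb(disk_count, disk_size_gb, hot_spares=0):
    """Usable capacity of a RAID 5 group: (N - 1) data disks' worth,
    after setting aside any hot spares."""
    data_disks = disk_count - hot_spares - 1  # one disk's worth goes to parity
    return data_disks * disk_size_gb

# Planned LUNs on the datacenter 1 array (from the list above).
luns_gb = {
    "host1-dc1 boot": 25,
    "host2-dc1 boot": 25,
    "host3-dc2 boot mirror": 25,
    "host4-dc2 boot mirror": 25,
    "vmpool-dc1": 1325,
    "vmpool-dc2 mirror": 1325,
}

planned = sum(luns_gb.values())
# Hypothetical example: 5 x 1000 GB drives, RAID 5, 1 hot spare.
usable = raid5_usable_gb(disk_count=5, disk_size_gb=1000, hot_spares=1)

print(f"planned: {planned} GB, usable after RAID 5 + spare: {usable} GB")
print("fits" if planned <= usable else "over-committed")
```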

I'm going to be running Site Recovery Manager as well. Datacenter 1 has an external Oracle server, but datacenter 2 does not; I'm thinking I'll just run the database in a VM since it'd be small to begin with.

I'm a little worried about storage performance, but I have no idea what's considered "normal" throughput.

Any other thoughts?

RParker
Immortal

First off, 2 machines in a cluster is hardly a "datacenter". I would put all 4 in the same cluster for failover and better performance/load balancing.

Next, performance on the SAN is REALLY subjective... there are many, many variables.

For starters: type of drives (Fibre Channel, SATA, SAS, SCSI?), spindle speed (15k or 10k?), what kind of RAID (RAID 1, 5, 6, 10, 0+1, etc.), and how many disks PER RAID group. I assume with 5 TB you can lump all those drives into one big RAID group and just set up different LUNs for the ESX servers.

That's how I would do it, but for throughput there is no magic number... lower IO doesn't necessarily mean lower performance; it depends on the backplane, cache, how you configure the RAID (write-through vs. write-back), the RAID controller types... there really is not a hard and fast number we can give you to say WOW, that's great, or wow, that's horrible. So the best thing is to build it, test it, and see for yourself.
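
To put a number on that kind of back-of-envelope thinking, a common rule of thumb is that each spindle delivers some fixed random IOPS and each RAID level multiplies writes by a "write penalty". The sketch below only encodes that rule of thumb; the per-disk IOPS figures, penalties, and read/write mix are generic assumptions, not measurements of any particular array.

```python
# Rule-of-thumb estimator for the front-end random IOPS of a RAID group.
# Per-disk IOPS figures and the read/write mix are illustrative assumptions;
# real results depend on cache, backplane, block size, controller, etc.

DISK_IOPS = {"7.2k": 80, "10k": 130, "15k": 180}   # ballpark IOPS per spindle
WRITE_PENALTY = {"raid0": 1, "raid1": 2, "raid10": 2, "raid5": 4, "raid6": 6}

def estimate_front_end_iops(spindles, disk_type, raid, read_fraction):
    """Solve raw_iops = reads + writes * penalty for total front-end IOPS."""
    raw = spindles * DISK_IOPS[disk_type]
    penalty = WRITE_PENALTY[raid]
    write_fraction = 1.0 - read_fraction
    return raw / (read_fraction + write_fraction * penalty)

# Example: 4 spindles of 10k drives, RAID 5, 70% reads / 30% writes.
print(round(estimate_front_end_iops(4, "10k", "raid5", 0.7)))  # -> 274
```

Measured results can land well above that (cache hits) or below it (small random writes), which is exactly why testing your own workload matters.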

Or call EMC for their VMware best practices; that works too.

pcable
Contributor

First off, 2 machines in a cluster is hardly a "datacenter". I would put all 4 in the same cluster for failover and better performance/load balancing.

Oh, I'm aware that it's hardly a datacenter. However, I want to leverage SRM, and from what I've read, that's how SRM works. SRM will automatically break the mirror on the drive in a failover situation as well, so since we have the licenses for it, I might as well use it.

For starters: type of drives (Fibre Channel, SATA, SAS, SCSI?), spindle speed (15k or 10k?), what kind of RAID (RAID 1, 5, 6, 10, 0+1, etc.), and how many disks PER RAID group. I assume with 5 TB you can lump all those drives into one big RAID group and just set up different LUNs for the ESX servers.

SATA over iSCSI. 10k spindles, I believe; RAID 5, with 4 disks in the RAID 5 group and 1 hot spare.
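
For a very rough ceiling on that group, the usual rule of thumb (around 130 random IOPS per 10k spindle, a RAID 5 write penalty of about 4, and an assumed 70/30 read/write mix; all assumptions, not measurements) works out like this:

```python
# Very rough estimate for a 4-disk RAID 5 group of 10k drives.
# 130 IOPS/spindle and the 70/30 read/write mix are assumptions.
spindles, per_disk_iops = 4, 130
raw_iops = spindles * per_disk_iops                     # ~520 back-end IOPS
read_frac, write_penalty = 0.7, 4                       # RAID 5 write penalty ~4
front_end = raw_iops / (read_frac + (1 - read_frac) * write_penalty)
print(f"~{front_end:.0f} front-end random IOPS")        # roughly 270
```

If those are actually 7.2k SATA drives the number drops further, so it's worth confirming the spindle speed.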

Thanks for your response RParker.

cdc1
Expert

Double-check that the memory in those servers is installed evenly between the processors; otherwise you're going to have NUMA issues on your ESX hosts. It's probably been installed correctly, but better to double-check now before you've set anything up.
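
A trivial balance check makes the point concrete: with one memory controller per CPU, both sockets should end up with the same amount (and layout) of local memory. The DIMM sizes below are a made-up example, not the actual servers.

```python
# Check that DIMMs are split evenly across the two CPU sockets,
# so each NUMA node has the same amount of local memory.
# The layout below is a hypothetical example, not the real config.

dimms_gb = {
    "cpu0": [4, 4, 4, 4],   # DIMM sizes (GB) behind CPU 0
    "cpu1": [4, 4, 4, 4],   # DIMM sizes (GB) behind CPU 1
}

totals = {cpu: sum(sizes) for cpu, sizes in dimms_gb.items()}
counts = {cpu: len(sizes) for cpu, sizes in dimms_gb.items()}

if len(set(totals.values())) == 1 and len(set(counts.values())) == 1:
    print(f"balanced across NUMA nodes: {totals}")
else:
    print(f"unbalanced NUMA nodes: {totals} (DIMM counts: {counts})")
```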

More info from the Docs for your servers:

Installing Server Options

DIMM Population Rules

The DIMM population rules for the Sun Fire X4170, X4270, and X4275 Servers are as follows:

1. Do not populate any DIMM sockets next to an empty CPU socket. Each processor contains a separate memory controller.

2. Each CPU can support a maximum of:

  • Nine dual-rank (DR) or single-rank (SR) DIMMs; or

  • Six quad-rank (QR) DIMMs with two per memory channel; or

  • Three QR DIMMs with one per channel and three DR or SR DIMMs.

3. Populate DIMMs by location according to the following rules:

    • Populate the DIMM slots for each memory channel that are the farthest from the CPU first. For example, populate D8/D5/D2 first; then D7/D4/D1 second; and finally, D6/D3/D0.

    • Populate QR DIMMs first, followed by SR or DR DIMMs.

    • Populate QR DIMMs in blue sockets (D8/D5/D2) first, then white sockets (D7/D4/D1). Note that QR DIMMs are supported in white sockets only if the adjacent blue socket contains a QR DIMM.

    • Populate QR, SR, or DR DIMMs in sets of three for each CPU, one per memory channel.

4. For maximum performance, apply the following rules:

    • The best performance is ensured by preserving symmetry. For example: adding 3x of same kind of DIMMs, one per memory channel; and, if the server has two CPUs, ensuring that both CPUs have the same size of DIMMs populated in the same manner.

    • In certain configurations, DIMMs will run slower than their individual maximum speed.

TABLE 2-3 Memory Considerations and Limitations

1. DIMMs are available in two speeds: 1066 MHz and 1333 MHz.

2. DIMM speed rules are as follows:

      • 3x of the same kind of DIMMs per channel = 800 MHz

      • 2x of the same kind of DIMMs per channel = 1333 MHz (for single-rank and dual-rank DIMMs) or 800 MHz (for quad-rank DIMMs)

      • 1x of the same kind of DIMMs per channel = 1333 MHz (if using 1333 MHz DIMMs)

      • 1x of the same kind of DIMMs per channel = 1066 MHz (if using 1066 MHz DIMMs)
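
Those speed rules boil down to a small lookup on DIMMs per channel, rank, and rated speed. Here is a tiny sketch that just encodes the table above (the example call at the end is hypothetical):

```python
# Encode the TABLE 2-3 speed rules: resulting memory speed as a function
# of DIMMs per channel, DIMM rank, and the DIMM's rated speed.

def channel_speed_mhz(dimms_per_channel, rank, rated_mhz):
    """rank: 'SR', 'DR', or 'QR'; rated_mhz: 1066 or 1333 (per the table above)."""
    if dimms_per_channel == 3:
        return 800
    if dimms_per_channel == 2:
        return 800 if rank == "QR" else 1333
    if dimms_per_channel == 1:
        return rated_mhz          # 1333 MHz DIMMs run at 1333, 1066 MHz at 1066
    raise ValueError("expected 1-3 DIMMs on a populated channel")

# Hypothetical example: two dual-rank 1333 MHz DIMMs on a channel.
print(channel_speed_mhz(2, "DR", 1333))  # -> 1333
```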
