VMware Cloud Community
Chad2011
Contributor

New to ESXi and vCenter

Hi,

I recently purchased the Essentials kit and have been looking for documentation on sizing and best practices.

Currently I am not using shared storage, but I'll most likely upgrade to Essentials Plus and then Standard as I grow. I expect to grow to between 5 and 15 ESXi hosts over the next few years, and once I pass the 3-host limit that Essentials/Essentials Plus offers I'll move on up.

So for the vCenter host I purchased a single quad-core with hyper-threading and 8GB of memory (the recommended config from Dell). Would this support SQL, vCenter, and Update Manager on the same host while growing to 15 ESXi hosts?

SQL/vCenter install: coming from Windows/Hyper-V SCVMM, I'm used to setting up SQL with an Active Directory service account, using Windows authentication, and leaving mixed mode disabled for security reasons. Would using the SYSTEM account that vCenter installs with limit any functionality? (For example, in SCVMM you have to run the services with a domain account to use shared ISOs.) In a lab I have set up vCenter with both Windows auth and SQL auth to see how the install goes; the only thing I ran into was that you have to install the software logged in as the service account.

For the ESXi hosts I'm using a dual quad-core with 32GB of memory and an 8-disk RAID 10 array (with Hyper-V I would use 2 disks in RAID 1 for the OS and a 6-disk RAID 10 for VMs). I'm currently experimenting with running from embedded flash memory (a USB drive without RAID 1). Is it common practice to run on USB, or to install to HDD? If installing to HDD, do you still follow RAID 1 for the OS and another array for VMs? One thing I was really hoping for was the ability to use all spindles in a single array instead of losing two to the OS install.

The Ethernet vSwitch-related stuff I think I have figured out: I'll use 2x 2-port NICs in a LAG/LACP group with multiple VLANs and another 2-port NIC for the VMkernel. Dell's reference design uses six NICs: four for VMkernel/vMotion/VM Network and two ports for iSCSI. (I might do that and prep for adding shared storage in the future.)

Thank you for any help you can offer.

8 Replies
RParker
Immortal

Chad2011 wrote:

So for the vCenter host I purchased a single quad-core with hyper-threading and 8GB of memory (the recommended config from Dell). Would this support SQL, vCenter, and Update Manager on the same host while growing to 15 ESXi hosts?

SQL/vCenter install: coming from Windows/Hyper-V SCVMM, I'm used to setting up SQL with an Active Directory service account, using Windows authentication, and leaving mixed mode disabled for security reasons. Would using the SYSTEM account that vCenter installs with limit any functionality? (For example, in SCVMM you have to run the services with a domain account to use shared ISOs.) In a lab I have set up vCenter with both Windows auth and SQL auth to see how the install goes; the only thing I ran into was that you have to install the software logged in as the service account.

Yes, that would work fine.  But since you are NEW, I suggest you try the vCenter Appliance.  It is Linux, but it's pre-built; you don't need to do ANYTHING, not even set up a database, it's all internal.  You deploy the appliance, set it to use the embedded DB, and turn it on.  Done.  15 minutes and you are good to go.  Save that physical host for something else and make the appliance a VM on a host (vSphere 5.0).
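For anyone curious what that looks like in practice, here's a rough sketch of deploying the appliance from the command line with VMware's ovftool. The OVA file name, host address, and datastore name below are assumptions, not something from this thread; substitute your own.

```shell
# Sketch: deploying the vCenter Server Appliance with ovftool.
# File name, ESXi host, and datastore are assumptions -- replace with
# your own download and environment details. ovftool prompts for the
# host password when it connects.
ovftool --acceptAllEulas \
        --datastore=datastore1 \
        --name=vcenter-appliance \
        "VMware-vCenter-Server-Appliance-5.0.0.ova" \
        "vi://root@esxi01.example.local/"

# After first boot, browse to https://<appliance-ip>:5480 to accept
# the EULA and choose the embedded database.
```

The vi:// target can also point at a vCenter instance once you have one, but for a first deployment a standalone host works fine.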

SQL should be SQL authentication, not AD, which means mixed mode SHOULD be enabled.  The LAST thing you want is an AD interruption causing a data integrity problem because vCenter couldn't authenticate to the server.
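If you do go the SQL-authentication route, the server-side setup is small. Here's a rough sketch using sqlcmd; the server name, login name, database name, and password are all assumptions for illustration, and mixed mode must already be enabled on the instance.

```shell
# Sketch: create a SQL-auth login and database for vCenter.
# "sqlserver01", "vpxuser", "VCDB", and the password are placeholders.
# -E uses your Windows credentials just for this admin session.
sqlcmd -S sqlserver01 -E -Q "CREATE LOGIN vpxuser WITH PASSWORD = N'StrongPassword1!', CHECK_POLICY = OFF"
sqlcmd -S sqlserver01 -E -Q "CREATE DATABASE VCDB"
sqlcmd -S sqlserver01 -E -Q "USE VCDB; CREATE USER vpxuser FOR LOGIN vpxuser; EXEC sp_addrolemember 'db_owner', 'vpxuser'"
```

vCenter then connects with that SQL login via its ODBC DSN, so an AD outage can't break the database connection.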

For the ESXi hosts I'm using a dual quad-core with 32GB of memory and an 8-disk RAID 10 array (with Hyper-V I would use 2 disks in RAID 1 for the OS and a 6-disk RAID 10 for VMs). I'm currently experimenting with running from embedded flash memory (a USB drive without RAID 1). Is it common practice to run on USB, or to install to HDD? If installing to HDD, do you still follow RAID 1 for the OS and another array for VMs? One thing I was really hoping for was the ability to use all spindles in a single array instead of losing two to the OS install.

The Ethernet vSwitch-related stuff I think I have figured out: I'll use 2x 2-port NICs in a LAG/LACP group with multiple VLANs and another 2-port NIC for the VMkernel. Dell's reference design uses six NICs: four for VMkernel/vMotion/VM Network and two ports for iSCSI. (I might do that and prep for adding shared storage in the future.)

Once ESXi boots, the drive is a non-issue.  USB will be fine; don't waste RAID 1 on it.  You don't need ANY drives with vSphere 5... in fact you can go COMPLETELY diskless (see Auto Deploy).  No drives, no install.  The host boots, gets an IP, and runs the OS on the fly... that's it.

Also you can RAID all the disks even if you do install the OS; it will partition itself and keep the datastore separate.  I never personally create a RAID 1 for the OS, that is a ridiculous waste of drives, spindles, and space.  I make it one BIG array: more spindles, better performance.  The OS doesn't take ANY IOPS anyway.  You can always create virtual disks in the RAID setup to partition it if you must, but that's your call.  I say RAID ALL the disks in a physical box as one array.  If you are worried about performance, do RAID 50 then... not RAID 10, RAID 50.

Chad2011
Contributor
Contributor

I did look at the VCA and noticed it didn't support Update Manager, and I read the embedded DB was for very small configs like 5 hosts or less. I'd like to deploy once and be properly sized to grow, only needing to add additional licenses. I'll have a look at Auto Deploy (I didn't know what it was, but that sounds interesting).

So after your comment on the vCenter install and SQL auth, that makes sense; I guess that is why they use the SYSTEM account in the documents I have found on installing SQL.

Thanks, you have given me a new direction to look at.

golddiggie
Champion

Instead of having a physical box for vCenter, I'd just make it as a VM. I've done this, with great success, at several places now. This gives you several advantages, not the least of which is you can use HA with it (once you have the feature on your hosts). So, if you lose a physical box, the vCenter Server is still running since it's a VM (not the virtual appliance). I always have the SQL database on another VM for the vCenter Server to use. That allows you to make the vCenter Server much leaner.

I'm rather surprised that Dell recommended you use an actual physical box for the vCenter Server. VMware actively promotes/recommends going virtual for the vCenter Server these days. Even for 'up to 15 hosts' you should be fine with vCenter as a VM, with reasonable-spec hosts that is (which it sounds like you're going with)...

For the network configuration, design it with no single point of failure. So, have your Management Network on one vSwitch, spanning two network cards (such as one of the onboard ports and one of the add-on card ports). Same thing for vMotion vSwitch (for TINY environments, you can place both on the same vSwitch, if you network it up right). I would span the VM traffic across a pair of ports too (from different cards). Right there, you need at least six physical network connections. If you combine the Management Network and vMotion, and slice it down to just two ports (alternating active/standby on the port groups) you can use two ports for iSCSI traffic.
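The active/standby layout described above can be sketched from the ESXi shell with esxcli. This is only an illustration: the vmnic numbers, vSwitch name, and port-group labels are assumptions, and on a real host you'd match them to which ports sit on which physical card.

```shell
# Sketch: one vSwitch carrying Management and vMotion, with two uplinks
# from different physical cards (vmnic0 onboard, vmnic4 add-on card --
# both names are assumptions).
esxcli network vswitch standard add --vswitch-name=vSwitch1
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic0
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic4

esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name="Management Network"
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name="vMotion"

# Alternate active/standby per port group: each traffic type prefers a
# different physical card but fails over to the other if a card dies.
esxcli network vswitch standard portgroup policy failover set \
    --portgroup-name="Management Network" --active-uplinks=vmnic0 --standby-uplinks=vmnic4
esxcli network vswitch standard portgroup policy failover set \
    --portgroup-name="vMotion" --active-uplinks=vmnic4 --standby-uplinks=vmnic0
```

That pairing is what removes the single point of failure: losing either card leaves both Management and vMotion reachable on their standby uplink.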

Setting the host to boot from a USB flash drive: just make sure you configure it properly and size it correctly. I generally recommend going with a quality make/model in the 4-8GB range. Sure, you can use smaller, but with the sticks being so cheap these days it almost doesn't make sense to go with a tiny one. I've had good results with both 4GB and 8GB SanDisk Cruzer models...

Chad2011
Contributor

Looking at Auto Deploy, it looks to be a feature only on Enterprise Plus, so I think that is out of the question.

I purchased a handful of 4GB Kingston G3 USB drives.

I've never been a fan of virtualizing the management software used to manage your virtual infrastructure. HA would make it a bit better, but I prefer that type of software to be on a physical host, just in case I have an issue on the virtual host it's running on. I also try to keep anything not pertaining to the services being offered off the hosts. So for IT management infrastructure I use physical boxes, while RDS, ERP, etc. I virtualize, along with small SQL loads. I'm sure my stance on that will change in time, and I'll have hosts dedicated to infrastructure management and others for services.

Thanks

RParker
Immortal

Chad2011 wrote:

I did look at the VCA and noticed it didn't support Update Manager, and I read the embedded DB was for very small configs like 5 hosts or less. I'd like to deploy once and be properly sized to grow, only needing to add additional licenses. I'll have a look at Auto Deploy (I didn't know what it was, but that sounds interesting).

So after your comment on the vCenter install and SQL auth, that makes sense; I guess that is why they use the SYSTEM account in the documents I have found on installing SQL.

Thanks, you have given me a new direction to look at.

Well, not sure why it says that; I have Update Manager, and with the vCenter Appliance it works fine.

The configs are wrong.  Read the documentation: page 43 of the vSphere Management guide says fewer than 100 hosts and 1,000 VMs, not 5 hosts or less, so Yellow Bricks (or wherever you read that) is wrong.

RParker
Immortal

golddiggie wrote:

I'm rather surprised that Dell recommended you use an actual physical box for the vCenter Server. VMware actively promotes/recommends going virtual for the vCenter Server these days. Even for 'up to 15 hosts' you should be fine with vCenter as a VM, with reasonable-spec hosts that is (which it sounds like you're going with)...

You are surprised?  The vendors are losing money on hardware... if they DON'T suggest you use hardware for some things, they will lose money; it's one less piece of hardware to sell, not to mention warranty and potential licenses.  I am NOT surprised by this at all.

golddiggie
Champion

RParker wrote:

golddiggie wrote:

I'm rather surprised that Dell recommended you use an actual physical box for the vCenter Server. VMware actively promotes/recommends going virtual for the vCenter Server these days. Even for 'up to 15 hosts' you should be fine with vCenter as a VM, with reasonable-spec hosts that is (which it sounds like you're going with)...

You are surprised?  The vendors are losing money on hardware... if they DON'T suggest you use hardware for some things, they will lose money; it's one less piece of hardware to sell, not to mention warranty and potential licenses.  I am NOT surprised by this at all.

I guess that's because I've not had to deal directly with hardware manufacturers when designing an environment. I typically work with high-level partners that make their money on billable hours (for racking and stacking, etc.)... They have more real-world experience and would rather we spend less on hardware (and still get the job done right) so that we can spend more on getting their high-level engineers/architects in for services. Works pretty well for us too, since we typically cut the hours they want to bill us for, as I/we can do a good amount ourselves.

I also think that the better partners follow VMware's recommendations more closely. It could be so that they don't risk their partner status when people complain that partner X designed it and it runs like a half-dead dog.

RParker
Immortal

golddiggie wrote:

RParker wrote:

golddiggie wrote:

I'm rather surprised that Dell recommended you use an actual physical box for the vCenter Server. VMware actively promotes/recommends going virtual for the vCenter Server these days. Even for 'up to 15 hosts' you should be fine with vCenter as a VM, with reasonable-spec hosts that is (which it sounds like you're going with)...

You are surprised?  The vendors are losing money on hardware... if they DON'T suggest you use hardware for some things, they will lose money; it's one less piece of hardware to sell, not to mention warranty and potential licenses.  I am NOT surprised by this at all.

I guess that's because I've not had to deal directly with hardware manufacturers when designing an environment. I typically work with high-level partners that make their money on billable hours (for racking and stacking, etc.)... They have more real-world experience and would rather we spend less on hardware (and still get the job done right) so that we can spend more on getting their high-level engineers/architects in for services. Works pretty well for us too, since we typically cut the hours they want to bill us for, as I/we can do a good amount ourselves.

I also think that the better partners follow VMware's recommendations more closely. It could be so that they don't risk their partner status when people complain that partner X designed it and it runs like a half-dead dog.

All valid statements.  Dell, like HP, is TRYING to do more services, but that is VERY competitive.  If you have a customer (like us) that uses Dell for pretty much everything, we use less hardware when it comes time for a refresh, and since we used their services in the past we don't need them again to migrate to new hardware...

It may be trivial, but one box less for each 1,000 customers is a TON of money they will (potentially) lose if they can't make up for it with services... Dell, HP, and IBM all struggle with this; that's why the push for tablets, software sales, consulting, and being a vendor for 3rd-party software: they have to make it up someplace.  Their bread and butter is hardware, however.  Each box they don't sell is one less unit out the door... I agree with your assessment; just saying they will sell hardware every chance they get, even when it DOESN'T agree with VMware best practices.  It doesn't hurt to ask...
