VMware Cloud Community
Maniac47
Contributor

Challenges with networking setup and vCenter install? New 5-node 5.5 cluster

Hi Everyone,

I'm about to set up a new 5-node ESXi 5.5 cluster. I'd like to post my current thoughts and ideas on how it will be set up, but I do have questions about the best way to go about everything. This is a new data-center build-out, so connectivity to services located elsewhere (such as Active Directory servers) may not be available on day 1, even though those services do exist.

Each server has 8 network interfaces (2 x quad-port cards). Initially, each network segment will have 2 interfaces teamed (across separate switches/cards), and the VLANs I have laid out currently are:

VLAN 1 = Management network/HA

VLAN 2 = NFS network

VLANs 3-8 = Server Networks

VLAN 9 = vMotion

Obviously these are not the actual VLAN IDs; I'm just trying to keep everything simple. We would start by installing ESXi 5.5 on each of the hosts and configuring their management network information. We'd then install the vCenter Server Appliance by connecting to one of the hosts and deploying the appliance downloaded from VMware. Assuming this is OK so far (please let me know if there are any issues at this point), we'd make sure the appliance can resolve each of the hosts by both short name and FQDN over the management network (editing its hosts file if AD/DNS isn't available) and update its hostname via the management page before running through the install.
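(For illustration, the hosts-file entries I have in mind on the appliance would look roughly like this; the names and addresses below are made up:)

# /etc/hosts on the vCenter appliance - placeholder names and addresses
10.0.1.11   esxi01.lab.local   esxi01
10.0.1.12   esxi02.lab.local   esxi02
10.0.1.13   esxi03.lab.local   esxi03
10.0.1.14   esxi04.lab.local   esxi04
10.0.1.15   esxi05.lab.local   esxi05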

My question now is: if Active Directory is not available right away, how should we proceed with the install of the appliance? We might not have an AD server to point to for NTP time synchronization, or for joining the domain and handling authentication. I've seen others run into issues because the time synchronization on their servers was off.

Another question: since this appliance would initially be deployed with one NIC connected to the Management/HA network and a static IP, should we configure another port group on the ESXi host and assign a second NIC to the appliance to account for the Server Networks, then walk it off onto a VDS later? Or should that other connection be added via a new port group on the VDS after the appliance is already managing the cluster?

This is the first time I'd be installing ESXi at this scale with this many VLANs (I've previously been responsible for much smaller vCenter deployments: a single network with no dedicated VLANs for HA/vMotion/NFS/Management, just for the benefits of virtualization and a single point of management). Any advice or input someone could provide based on previous experience, or best practices I may have missed, would be extremely helpful.

Thanks guys!

7 Replies
Gortee
Hot Shot

Evening,

Lots of questions.  Allow me to ask some to help narrow it down. 

I would suggest the following course of action:

1. Install ESXi and set up the management network using a single NIC (native or VLAN tagged).

2. Direct-connect to each ESXi host, configure the shared storage, and make sure all nodes can see the storage.

3. Install the vCenter appliance and configure it to use the same network as the ESXi hosts (you will need hosts-file entries; if no DNS is available, you may want to reference the hosts in vCenter by IP).

4. Point the whole pile at public NTP servers (see the sketch after this list).

5. Set up your dvSwitch and migrate management off.
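For step 4, absent AD, the public pool servers work fine. On each ESXi host that's roughly the following (a sketch, assuming shell access is enabled; the same servers can be set for the appliance in its VAMI time settings):

# /etc/ntp.conf on each ESXi host - public pool servers as an example
server 0.pool.ntp.org
server 1.pool.ntp.org
server 2.pool.ntp.org

# restart the NTP daemon and make sure outbound NTP is allowed
/etc/init.d/ntpd restart
esxcli network firewall ruleset set --ruleset-id ntpClient --enabled true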

Now to a larger question: with 8 NICs to use, how do you divide the traffic?

The first real question to ask is how much bandwidth NFS requires. I assume these are all 1 Gb NICs and your NFS has a single target, a VIP that moves between controllers. As such, you can really only use one NIC at a time for NFS. I would suggest the following (a rough sketch of the NFS pairing follows below):

2 x NIC for NFS (active/standby), running tagged VLAN 2 or native depending on switch config

4 x NIC for server networks using load-balanced teaming (active/active/active/active), tagged VLANs

2 x NIC for management/vMotion - use them active/standby for management and standby/active for vMotion, with NIOC to control flows in case they end up on the same connection.

It depends, though... if you plan on having a lot of vMotion, you may want to reduce the number of NICs for VMs and add them to vMotion.

Also look at keeping loads off a single NIC (for NFS, use one port on one card and a port on the other card for the second uplink).
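As a rough sketch of that NFS pairing on a standard vSwitch (vmnic numbers, names, the VLAN, and the address are placeholders; the same layout can be built on the dvSwitch later):

# create a vSwitch with two uplinks on different physical cards
esxcli network vswitch standard add --vswitch-name vSwitch1
esxcli network vswitch standard uplink add --vswitch-name vSwitch1 --uplink-name vmnic2
esxcli network vswitch standard uplink add --vswitch-name vSwitch1 --uplink-name vmnic6

# NFS port group on VLAN 2, active/standby across the two uplinks
esxcli network vswitch standard portgroup add --vswitch-name vSwitch1 --portgroup-name NFS
esxcli network vswitch standard portgroup set --portgroup-name NFS --vlan-id 2
esxcli network vswitch standard policy failover set --vswitch-name vSwitch1 --active-uplinks vmnic2 --standby-uplinks vmnic6

# VMkernel port for the NFS traffic
esxcli network ip interface add --interface-name vmk1 --portgroup-name NFS
esxcli network ip interface ipv4 set --interface-name vmk1 --ipv4 10.0.2.11 --netmask 255.255.255.0 --type static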

I hope this helps. Let me know if you have additional questions.

Thanks,

J

Joseph Griffiths http://blog.jgriffiths.org @Gortees VCDX-DCV #143
Maniac47
Contributor

Hi Gortee!

Thanks for replying - sorry it's taken me so long to get back to you; we've actually been in the DC setting this stuff up.

So one annoying issue I'm having is that I can't connect to the vCenter Appliance from anywhere but the management network (VLAN 1). As a result, anything that needs to use the web client has to be on the same network segment. Basically, the management IP can only be reached from the management segment, and the server IP can only be reached from the server segment, even though both are routable and other physical machines on these segments are not having this issue. Here are my steps so far and what I'm experiencing:

1. I installed all the hosts, configured their management networks, and verified everything can talk to each other (vmk0).

2. I deployed the vCenter appliance on host1 and configured a static address via the CLI (/opt/vmware/share/vami/vami_config_net)

3. After being able to access the management page, I made sure the time settings were in sync across all hosts and the VCA, and ran through the default install (no issues).

4. All servers and the VCA can ping/vmkping each other; a cluster was created, I configured the rest of the port groups, and I successfully tested vMotion and such.

5. I then added a second NIC to the VCA, attached it to the Server Networks port group (VDS), and gave it an IP following this article: Musings on Information Technology - A view from the trenches: Setting up vCenter Server Appliance (v... which also worked (I'm able to ping others on the server network).

I haven't moved the management vmk0/adapters to the VDS yet; the vSwitch0 that was set up by default still has the management VMkernel port (vmk0) and the "VM Network" port group, with the vCenter server running there (again, on the Management VLAN).

For whatever reason, I cannot access anything on the vCenter Appliance from anywhere but the management network. This makes managing via the web or vSphere client impossible unless I'm on that network segment. I also cannot ping it by its Server Network address unless I'm in that particular Server VLAN segment, so it's very similar to the Management side of things. Both are routable, and we basically have a set of any-any rules in place while we build.

I'm able to access all the ESXi hosts via ports 80/443, I can connect to each of them from vSphere clients outside of the Management VLAN, and they respond to pings. The VCA is literally 1 IP away from the last ESXi host, in the same subnet, and I cannot reach it from anywhere else. Could anyone help with this?
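(For reference, the kind of checks involved from a workstation outside the Management VLAN; 10.0.1.20 is a placeholder for the VCA's management address:)

ping 10.0.1.20                   # no reply from outside the Management VLAN
traceroute 10.0.1.20             # see where the path dies
curl -k https://10.0.1.20/       # does TCP 443 answer at all?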

Maniac47
Contributor

Ok - a quick update:

I checked the eth0 configuration via the CLI and it didn't have a default gateway set. I remember trying to set it via the web interface; it kept saying it saved, but it never seemed to stick. I configured it with the vami_config_net command from before and reset the network. I'm now able to access it from other subnets, which is very good.

The Server Network address is still inaccessible, but that's most likely because eth1 is on a different subnet and can't reach the default gateway, which is on the Management subnet. Does this all make sense, or does it look like I've missed something else?
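(This is roughly what I'm checking from the appliance console to confirm that; the gateway and subnets are placeholders:)

ip addr show eth0
ip addr show eth1
ip route show                          # should now list "default via 10.0.1.1 dev eth0"

# eth1 traffic to other subnets would need its own route, something like:
# ip route add 10.0.30.0/24 via 10.0.3.1 dev eth1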

Thanks!

Maniac47
Contributor

Alright - so far so good!

I have one last question, as it relates to the gateway/routing and what is recommended. Thanks for the help, Gortee; it definitely came in handy to double-check everything, and all is working well so far. Since changing the server's default route to go out the management network, we're able to reach it from other subnets without issue (it had just been missing the gateway). Now that it is also dual-homed, should we add routes so eth1 can communicate with other networks? Or should we make the Server Network the default route, with routes on eth0 for the Management side of things?

I'm just wondering what the best practice / recommendation is as far as resiliency and ease of management.
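(To make the first option concrete, this is the sort of thing I have in mind, with placeholder subnets and gateways; not sure it's the recommended way, hence the question:)

# default route stays on the management side (already set via vami_config_net)
ip route show | grep default           # default via 10.0.1.1 dev eth0

# add specific routes out eth1 only for the networks it needs to reach
ip route add 10.0.30.0/24 via 10.0.3.1 dev eth1
ip route add 10.0.31.0/24 via 10.0.3.1 dev eth1

# my understanding is these would go in /etc/sysconfig/network/routes on the
# SLES-based appliance to survive a reboot - please correct me if that's wrong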

Thanks!

Gortee
Hot Shot

Evening,

Sorry about the delay. I had a few major transitions at work and downtimes back to back. It looks like you have solved most of the issues. When you can get out but not back in, it's one of two things: a gateway issue or a subnet issue. Glad it worked out.

On your question about best practice: it really depends on your requirements.

Normally you setup something like this:

The management network would contain vSphere (vCenter) and the ESXi hosts.

A virtual machine network with the virtual machines. vCenter and the hosts don't need a NIC in this network or anything; they control the VMs via the virtual hardware and VMware Tools.

Normally it's a best practice to separate vMotion and management, but I avoided that in the original design in order to provide the maximum number of ports for virtual machines and NFS-based storage.

Please provide more details if you have additional questions and I'll get to it right away.

Thanks,

J

Joseph Griffiths http://blog.jgriffiths.org @Gortees VCDX-DCV #143
carls64
Contributor

Greetings, I found your post here and was very pleased with what I read. Your experience seems to have blessed you with a good amount of knowledge in this area. Here is my setup with 3 hosts. Recently we started experiencing some iSCSI network latency issues (above 20 ms) affecting the VMs; would you be so kind as to give me some recommendations based on our environment? Below is our inventory of equipment and its current basic configuration. We have recently added a couple of new stacked switches (layer 3), have moved the vMotion network onto its own VLAN, and are working toward doing the same for the management LAN. (We already have one of the hosts on its own VLAN/network.)

3 ESXi hosts (2 x Dell R620, 1 x Dell R720), each with 3 x 4-port NICs (12 ports total) and 64 GB RAM (wish I had put more in them ;-))

1 Dell MD3200i iSCSI disk array with 12 x 450 GB 15K SAS drives (11 + 1 spare) and 2 x 4-port Gb Ethernet controllers

2 x Dell 5424 switches dedicated to traffic between the MD3200i and the 3 hosts

Each host is connected to the iSCSI network through 4 dedicated NIC ports across two different cards

Each host has 1 dedicated vMotion NIC port on its own VLAN, connected to a stacked Dell N3048 layer 3 switch

Each host will have 2 dedicated (active/standby) NIC ports (on 2 different NIC cards) for management

Each host will have a dedicated NIC for backup traffic (which has its own dedicated layer 3 network/switch)

Each host will use the remaining 4 NIC ports (across two different NIC cards) for production/VM traffic

Currently those remaining 4 VM NICs are connected to our current production switch (no VLANs), so we have a lot of broadcast traffic with about 150 devices connected. We are working toward creating separate VLANs and segmenting our network out. That's hard to do while the environment is in production, so we are planning carefully.

Also, are there other options for getting more performance out of our iSCSI environment that may not be obvious to us? When I built this system 4 years ago I tried to plan ahead, asking questions and making sure I had plenty of redundancy and bandwidth, but now I'm finding out that may not be the case: when snapshots are being created for the backups (Unitrends client), latency issues more often than not cause orphaned VMDKs that we have to delete later (consuming disk space).
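(For context, this is roughly how the latency is being observed on the hosts, plus one change we're unsure about; the device ID below is a placeholder:)

# esxtop on a host: press 'u' for the device view and watch DAVG/cmd and KAVG/cmd
esxtop

# check which path selection policy the MD3200i LUNs are using
esxcli storage nmp device list

# switching a LUN to round robin (placeholder device ID) - is that advisable here?
esxcli storage nmp device set --device naa.600000000000000000000000 --psp VMW_PSP_RR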

If there is any further information you need, or screenshots of our network config on the hosts I would be glad to share them with you.

Thank you in advance!

Gortee
Hot Shot

Carls64,

Thanks for your kind words. I wrote a blog article about your questions: http://vexpert.me/pa  Please feel free to comment here or there with any additional questions you might have. I figured it really needed diagrams, so I did it in blog format. I hope it helps.

Thanks,

Joseph

Joseph Griffiths http://blog.jgriffiths.org @Gortees VCDX-DCV #143