
# Load Balancing vCloud Director Cells with vShield Edge

According to VMware, vShield Edge is an edge network security solution for virtual datacenters. It provides essential security features and gateway services, such as load balancing for web performance and availability.

This means you can use vShield Edge to load balance the vCloud Director cells.

 

Ports used by vCloud Director:

  • Web Access: HTTP (80) and HTTPS (443)
  • Console Proxy: TCP (443)
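
Before configuring the balancer, it is worth confirming that each cell actually answers on those ports. A minimal sketch in PowerShell, with hypothetical cell IPs (Test-NetConnection requires Windows PowerShell 4.0 or later):

# Hypothetical vCloud Director cell IPs - replace with your own
$cells = "10.1.1.11", "10.1.1.12"
foreach ($cell in $cells) {
    foreach ($port in 80, 443) {
        Test-NetConnection -ComputerName $cell -Port $port |
            Select-Object ComputerName, RemotePort, TcpTestSucceeded
    }
}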

 

1) Open the vShield Manager and, in the Hosts & Clusters view, click Datacenters > your datacenter > Network Virtualization > Edges > Add ( + );

 

(screenshot: loadbalance-vshield01.jpg)

 

2) Set the name, hostname and description;

* If necessary, enable the HA option for high availability of the Edge.

 

3) Set the access credentials or keep the default credentials of the vShield Manager (User: admin | Password: default);

 

4) Set the size of the Edge, enable automatic rule generation and click Add ( + ), then set the cluster/resource pool and datastore for the vShield Edge appliance. The Edge size can be changed later if necessary;

 

(screenshot: loadbalance-vshield04.jpg)

 

5) Click Add ( + ), set the network interface name and type, select the network it will connect to, and add the IPs to be used for load balancing. Since load balancing is configured for two services (Web Access and Console Proxy), two IPs are required (10.1.1.1 and 10.1.1.2);

 

(screenshot: loadbalance-vshield05.jpg)

 

6) Enter the Default Gateway of the network;

 

7) Configure the firewall and HA according to your needs;

 

(screenshot: loadbalance-vshield07.jpg)

 

8) Check the summary and, if everything is OK, click Finish to start deploying the appliance;

 

9) Right-click the vShield Edge > Manage > Load Balancer > Pools;

 

10) To create the pool for the Web Access service, click Add ( + ) > define a name (vCloud_Web_Access_Pool) > select HTTP (80), HTTPS (443) and ROUND_ROBIN > in URI for HTTP service, set the health-check path "/cloud/server_status" > add the IPs of the vCloud Web Access cells > check the summary and click Finish;

 

(screenshots: loadbalance-vshield08.jpg, web-access-health-check.jpg, loadbalance-vshield10.jpg)

 

* After clicking Publish Changes, enable the Load Balancer service.

 

(screenshot: loadbalance-vshield11.jpg)

 

11) To create the pool for the Console Proxy service, click Add ( + ) > define a name (vCloud_Console_Proxy_Pool) > select TCP (443) and ROUND_ROBIN > in URI for HTTP service, set the health-check path "/sdk/vimServiceVersions.xml" > change the port to TCP 443 > add the IPs of the vCloud Console Proxy cells > check the summary and click Finish;

 

(screenshots: loadbalance-vshield12.jpg, console-proxy-health-check.jpg, loadbalance-vshield14.jpg)
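
Before moving on, you can confirm that the health-check URIs respond. A minimal sketch, assuming hypothetical cell IPs and plain HTTP on the Web Access port; a healthy vCloud Director cell should answer /cloud/server_status with "Service is up.":

# Hypothetical Web Access cell IPs - replace with your own
$webCells = "10.1.1.11", "10.1.1.12"
foreach ($cell in $webCells) {
    # -UseBasicParsing avoids the Internet Explorer dependency on older Windows PowerShell
    Invoke-WebRequest -Uri "http://$cell/cloud/server_status" -UseBasicParsing |
        Select-Object StatusCode, Content
}
# The Console Proxy check (/sdk/vimServiceVersions.xml) runs over HTTPS on 443,
# so the cell's self-signed certificate must be trusted or validation relaxed first.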

 

After applying the changes, you can view the pools as follows:

 

(screenshot: loadbalance-vshield15.jpg)

 

12) Under Load Balancer, click Virtual Server > Add ( + ) > set the name of the virtual server (vCloud_Web_Access_VS) > enter the IP (created on the Edge earlier) > select the existing pool (in this case vCloud_Web_Access_Pool) > enable HTTP (80) and HTTPS (443) > click Save;

 

(screenshot: loadbalance-vshield16.jpg)

 

13) Under Load Balancer, click Virtual Server > Add ( + ) > set the name of the virtual server (vCloud_Console_Proxy_VS) > enter the IP (created on the Edge earlier) > select the existing pool (in this case vCloud_Console_Proxy_Pool) > enable TCP (443) > click Save;

 

(screenshot: loadbalance-vshield17.jpg)

 

After applying the changes, you can view the virtual servers as follows:

 

(screenshot: loadbalance-vshield18.jpg)

 

Perfect, load balancing is ready.

Now just create a DNS record pointing to the Edge virtual IP configured for Web Access; for example, create a record called "cloud" pointing to 10.1.1.1.

 

(screenshot: loadbalance-vshield20.jpg)
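
If the zone lives on a Windows DNS server, the same record can be created with the DnsServer PowerShell module. A minimal sketch, assuming a hypothetical zone named rc.local:

# Requires the DnsServer module (Windows Server DNS role or RSAT)
# Zone name and VIP are assumptions - adjust for your environment
Add-DnsServerResourceRecordA -ZoneName "rc.local" -Name "cloud" -IPv4Address "10.1.1.1"

# Confirm the record resolves and the virtual IP answers on the published port
Resolve-DnsName "cloud.rc.local"
Test-NetConnection -ComputerName "cloud.rc.local" -Port 443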

 


# LUN Sizing for Virtualized Environments: Isolation vs. Consolidation

In most virtualization projects, regardless of the hypervisor, I notice many questions about the size and number of LUNs. Many administrators choose consolidation; others prefer isolation.

Below I describe the characteristics, pros, cons, and recommendations for creating LUNs in virtualized environments.

 

Isolation

With isolated storage, each virtual machine, regardless of its size or importance to the business, has its own LUN. In some cases, a single virtual machine may even have two or more LUNs, used for the C: and D: drives, for example. Keeping one virtual machine per LUN can improve performance because there is no contention on the LUN. On the other hand, you can quickly hit the maximum number of LUNs supported by the hypervisor, and resources are wasted because free space on one LUN cannot be used by another virtual machine.

An environment with 50 virtual machines becomes complex and very difficult to administer, because the storage array will have 50 LUNs and the hypervisor 50 datastores/volumes. In other words, before creating a new virtual machine, you must create a new LUN, present it to the hypervisor, and configure it.

 

(diagram: isolation-vm.png)

  • Pros
    • If a problem occurs with a LUN, it affects only one virtual machine;
    • Possible performance improvement, since there is no contention on the LUN.
  • Cons
    • Wasted resources;
    • Complex environment;
    • Much time spent managing and maintaining the environment.

 


Consolidation

With consolidated storage, all virtual machines are stored on a single LUN. This configuration makes better use of resources and simplifies maintenance and management of the environment. However, contention on the LUN is high and may degrade virtual machine performance.

An environment with 50 virtual machines becomes simple and easy to administer, because the storage array has only one LUN and the hypervisor a single datastore/volume.

 

(diagram: consolidation-vm.png)

  • Pros
    • Better use of resources;
    • Simple environment;
    • Easier management and maintenance.
  • Cons
    • If a problem occurs with the LUN, it affects all virtual machines;
    • Possible performance reduction, due to heavy contention on the LUN.

 

 

Okay, so what LUN size and storage type should I use?

As you can see, the pros of one storage approach are the cons of the other, so it is not possible to adopt a standard storage type and LUN size; it all depends on the environment and its needs. VMware recommends a mix of the two approaches: consolidated storage with some isolation, so that LUNs are large enough to hold a certain number of virtual machines. But this also depends on the hardware (disk size, RPM, controller cache, etc.) and on the virtual machines themselves (avoid storing SQL and Exchange servers on the same LUN, for example). Most manufacturers suggest performing a full analysis of the environment, considering the hardware, the needs, and the growth prospects, before stipulating the number and size of LUNs.
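
To make that reasoning concrete, a common rule of thumb (an assumption here, not an official vendor formula) sizes a LUN as the number of virtual machines it should hold times their average footprint (virtual disk plus swap), plus headroom for snapshots and growth:

# All values are illustrative assumptions
$vms_per_lun = 15      # target number of VMs on the LUN
$avg_vmdk_gb = 40      # average virtual disk size per VM
$avg_swap_gb = 4       # VM swap file (roughly the unreserved VM memory)
$headroom = 0.25       # free space kept for snapshots and growth

$lun_size_gb = $vms_per_lun * ($avg_vmdk_gb + $avg_swap_gb) * (1 + $headroom)
"Suggested LUN size: $lun_size_gb GB"   # 15 * 44 * 1.25 = 825 GB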

 

There are some precautions to take before and after creating LUNs (a small audit sketch follows this list):

  • Know the hardware limitations;
  • Know the size and number of virtual machines, and estimate growth;
  • Separate LUN groups by transport protocol (NFS, iSCSI, FC, etc.), disk type (SATA, SCSI, SAS, SSD, etc.), and virtual machines that require more I/O.
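
PowerCLI can also give you the current picture before you decide. A minimal sketch, assuming you are already connected to vCenter Server with Connect-VIServer, that lists each datastore with its capacity, free space, and number of VMs:

# Assumes an existing Connect-VIServer session
Get-Datastore | Sort-Object Name | Select-Object Name,
    @{N = "CapacityGB"; E = { [math]::Round($_.CapacityGB, 1) } },
    @{N = "FreeGB";     E = { [math]::Round($_.FreeSpaceGB, 1) } },
    @{N = "VMs";        E = { (Get-VM -Datastore $_).Count } }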

 

There are also some tools (thinkingloudoncloud | yellow-bricks) that assist in this decision; however, the best tool is to combine knowledge of the environment and its needs with common sense.

 


# Creating an iSCSI vSwitch with PowerCLI

This script connects to vCenter Server and creates a standard vSwitch for the iSCSI network on three VMware ESXi hosts. It uses two network interfaces and three VMkernels (iSCSI01, iSCSI02 and Storage Heartbeat). To run the script you need VMware PowerCLI, the hostname or IP and administrator credentials of the vCenter Server, the hostname or IP of each ESXi host, the naming for the vSwitch, the VMkernel IPs and the iSCSI network. You must also define which network interfaces are assigned to this vSwitch, and you can optionally set the MTU. For each iSCSI VMkernel, the script sets one network interface as active while the other is left unused.

(screenshots: script-vswitch-iscsi-2.jpg, script-vswitch-iscsi-3.jpg)

 

Below are some details of the script:

 

######################### General Setup ###########################
$vcenter_name = "rc-vc01.rc.local"
$vswitch_name = "iSCSI"
$portgroup1_name = "iSCSI01"
$portgroup2_name = "iSCSI02"
$portgroup3_name = "StorageHeartbeat"
$device_nic1 = "vmnic2"
$device_nic2 = "vmnic3"
$mtu_value = "9000"
$network_mask = "255.255.255.0"

 

(screenshot: script-vswitch-iscsi-4.jpg)

 

########################## VMware ESXi 1 ##########################
$esxi1_name = "rc-esxi01.rc.local"
$ip_portgroup1_esxi1 = "10.0.0.11"
$ip_portgroup2_esxi1 = "10.0.0.21"
$ip_portgroup3_esxi1 = "10.0.0.31"

 

# Connect to vCenter Server
Connect-VIServer $vcenter_name

# Create the vSwitch on every ESXi host managed by this vCenter Server
Foreach ($vmhost in (Get-VMHost))
{
    New-VirtualSwitch -VMHost $vmhost -Name $vswitch_name -Nic $device_nic1,$device_nic2 -MTU $mtu_value
}

########################## VMware ESXi 1 ##########################
# Create the three VMkernel port groups on the vSwitch
New-VMHostNetworkAdapter -VMHost $esxi1_name -PortGroup $portgroup1_name -VirtualSwitch $vswitch_name -IP $ip_portgroup1_esxi1 -SubnetMask $network_mask -FaultToleranceLoggingEnabled:$false -ManagementTrafficEnabled:$false -VsanTrafficEnabled:$false -VMotionEnabled:$false -Mtu $mtu_value
New-VMHostNetworkAdapter -VMHost $esxi1_name -PortGroup $portgroup2_name -VirtualSwitch $vswitch_name -IP $ip_portgroup2_esxi1 -SubnetMask $network_mask -FaultToleranceLoggingEnabled:$false -ManagementTrafficEnabled:$false -VsanTrafficEnabled:$false -VMotionEnabled:$false -Mtu $mtu_value
New-VMHostNetworkAdapter -VMHost $esxi1_name -PortGroup $portgroup3_name -VirtualSwitch $vswitch_name -IP $ip_portgroup3_esxi1 -SubnetMask $network_mask -FaultToleranceLoggingEnabled:$false -ManagementTrafficEnabled:$false -VsanTrafficEnabled:$false -VMotionEnabled:$false -Mtu $mtu_value

# Pin each iSCSI VMkernel to a single active NIC; the other NIC stays unused
Get-VirtualPortGroup -VMHost $esxi1_name -Name $portgroup1_name | Get-NicTeamingPolicy | Set-NicTeamingPolicy -MakeNicActive $device_nic1 -MakeNicUnused $device_nic2
Get-VirtualPortGroup -VMHost $esxi1_name -Name $portgroup2_name | Get-NicTeamingPolicy | Set-NicTeamingPolicy -MakeNicActive $device_nic2 -MakeNicUnused $device_nic1

 

You can edit the script to use more or fewer hosts, network interfaces, and VMkernels; feel free to adapt it to your needs. One way to scale it to many hosts is sketched below.
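
For example, instead of repeating the per-host block, the host names and VMkernel IPs can be kept in a table and looped over. A minimal sketch reusing the variables from the General Setup section; the second host and its IPs are hypothetical, and the service flags (-VMotionEnabled etc.) are omitted because they default to disabled:

# Hypothetical host/IP table - add one entry per ESXi host
$esxi_hosts = @(
    @{ Name = "rc-esxi01.rc.local"; IP1 = "10.0.0.11"; IP2 = "10.0.0.21"; IP3 = "10.0.0.31" },
    @{ Name = "rc-esxi02.rc.local"; IP1 = "10.0.0.12"; IP2 = "10.0.0.22"; IP3 = "10.0.0.32" }
)

Foreach ($h in $esxi_hosts)
{
    New-VMHostNetworkAdapter -VMHost $h.Name -PortGroup $portgroup1_name -VirtualSwitch $vswitch_name -IP $h.IP1 -SubnetMask $network_mask -Mtu $mtu_value
    New-VMHostNetworkAdapter -VMHost $h.Name -PortGroup $portgroup2_name -VirtualSwitch $vswitch_name -IP $h.IP2 -SubnetMask $network_mask -Mtu $mtu_value
    New-VMHostNetworkAdapter -VMHost $h.Name -PortGroup $portgroup3_name -VirtualSwitch $vswitch_name -IP $h.IP3 -SubnetMask $network_mask -Mtu $mtu_value
    # Pin each iSCSI port group to a single active NIC, as in the original script
    Get-VirtualPortGroup -VMHost $h.Name -Name $portgroup1_name | Get-NicTeamingPolicy | Set-NicTeamingPolicy -MakeNicActive $device_nic1 -MakeNicUnused $device_nic2
    Get-VirtualPortGroup -VMHost $h.Name -Name $portgroup2_name | Get-NicTeamingPolicy | Set-NicTeamingPolicy -MakeNicActive $device_nic2 -MakeNicUnused $device_nic1
}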

 

Script download: EN-Create-vSwitch-iSCSI.ps1

 
