
Solutions4Crowds

March 13, 2015

# Read this post in Portuguese

In most virtualization projects, regardless of the hypervisor, I notice that there are many questions about the size and number of LUNs. Many administrators choose consolidation, while others prefer isolation.

Below I describe the characteristics, pros, cons and recommendations for creating LUNs in virtualized environments.

 

Isolation

With isolated storage, each virtual machine, regardless of its size or importance to the business, has its own LUN. In some cases a single virtual machine even ends up with two or more LUNs, used for its C: and D: drives, for example. Keeping one virtual machine per LUN can improve performance, because there is no contention on the LUN; on the other hand, you can quickly reach the maximum number of LUNs supported by the hypervisor, and resources are wasted, because free space on one LUN cannot be used by another virtual machine.

An environment with 50 virtual machines becomes complex and very difficult to administer, because the storage array will have 50 LUNs and the hypervisor 50 datastores/volumes. In other words, before creating a new virtual machine, you must create a new LUN, present it to the hypervisor and configure it.
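To give an idea of that workflow, here is a minimal PowerCLI sketch for a vSphere environment, assuming the new LUN has already been presented by the storage team; the host name, datastore name and canonical device name are hypothetical placeholders:

# Rescan the host's storage adapters so the newly presented LUN becomes visible
Get-VMHostStorage -VMHost "esxi01.example.local" -RescanAllHba
# Locate the new device by its canonical name (placeholder value)
$lun = Get-ScsiLun -VMHost "esxi01.example.local" -LunType disk | Where-Object { $_.CanonicalName -eq "naa.xxxxxxxxxxxxxxxx" }
# Format it as a VMFS datastore dedicated to a single virtual machine
New-Datastore -VMHost "esxi01.example.local" -Name "vm50-datastore" -Vmfs -Path $lun.CanonicalName

Repeating these steps for every new virtual machine is what makes the isolated approach so labor-intensive at scale.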

 

[Figure: isolation-vm.png]

  • Pros
    • If a problem occurs with a LUN, it affects only one virtual machine;
    • Possible performance improvement, since there is no contention on the LUN.
  • Cons
    • Wasted resources;
    • Complex environment;
    • A lot of time is spent managing and maintaining the environment.

 


Consolidation

With consolidated storage, all virtual machines are stored on a single LUN. This configuration makes better use of resources and simplifies maintenance and management of the environment. However, contention on the LUN is high, which may reduce the performance of the virtual machines.

An environment with 50 virtual machines becomes simple and easy to administer, because the storage array will have only one LUN and the hypervisor a single datastore/volume.

 

[Figure: consolidation-vm.png]

  • Pros
    • Better use of resources;
    • Simple environment;
    • Easier management and maintenance.
  • Cons
    • If a problem occurs with the LUN, it affects all virtual machines;
    • Possible performance reduction due to heavy contention on the LUN.

 

 

Okay, so what LUN size and storage type should I use?

You can see that the pros of one storage type are the cons of the other, so it is not possible to adopt a standard storage type or LUN size; it all depends on the environment and its needs. VMware recommends a mix of the two types: consolidated storage with a degree of isolation, so the LUNs are large enough to hold a certain number of virtual machines. This also depends on the hardware (disk size, RPM, controller cache, etc.) and on the virtual machines themselves (avoid storing SQL and Exchange servers on the same LUN, for example). Most manufacturers suggest performing a full analysis of the environment, considering the hardware, the needs and the growth prospects, and only then stipulating the number and size of LUNs.

 

There are some precautions that should be taken before and after the creation of LUNs:

  • Know the limitations of the hardware;
  • Know the size and number of virtual machines and estimate their potential growth (see the sketch after this list);
  • Separate LUNs into groups, dividing them by transport protocol (NFS, iSCSI, FC, etc.), disk type (SATA, SCSI, SAS, SSD, etc.) and by virtual machines that require more I/O.
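As a starting point for that assessment in a vSphere environment, a small PowerCLI sketch like the one below (assuming an existing Connect-VIServer session) lists each datastore with its capacity, free space and number of virtual machines:

# List every datastore with its capacity, free space and VM count
foreach ($ds in Get-Datastore) {
    $vmCount = (Get-VM -Datastore $ds | Measure-Object).Count
    "{0}: {1:N0} GB total, {2:N0} GB free, {3} VMs" -f $ds.Name, $ds.CapacityGB, $ds.FreeSpaceGB, $vmCount
}

The output makes it easy to spot LUNs that are already crowded and LUNs whose free space is being wasted.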

 

There are also some tools (thinkingloudoncloud | yellow-bricks) that assist with this decision; however, the best tool is to combine knowledge of the environment, its needs and common sense.

 


# Read this post in Portuguese

This script connects to vCenter Server and creates a Standard vSwitch for the iSCSI network on three VMware ESXi hosts. It uses two network interfaces and three VMkernels (iSCSI01, iSCSI02 and Storage Heartbeat). To run the script you need VMware PowerCLI, the hostname or IP address and administrator credentials of the vCenter Server, the hostname or IP address of the ESXi servers, the naming to use for the vSwitch, the VMkernel IPs and the iSCSI network. You must also define which network interfaces are assigned to this vSwitch, and you can optionally set the MTU. The script also sets one network interface as active for each iSCSI VMkernel, while the other interface is left unused.

[Figures: script-vswitch-iscsi-2.jpg, script-vswitch-iscsi-3.jpg]

 

Below are some details of the script:

 

######################### General Setup ###########################

$vcenter_name = "rc-vc01.rc.local"

$vswitch_name = "iSCSI"

$portgroup1_name = "iSCSI01"

$portgroup2_name = "iSCSI02"

$portgroup3_name = "StorageHeartbeat"

$device_nic1 = "vmnic2"

$device_nic2 = "vmnic3"

$mtu_value = "9000"

$network_mask = "255.255.255.0"

 

[Figure: script-vswitch-iscsi-4.jpg]

 

########################## VMware ESXi 1 ##########################

$esxi1_name = "rc-esxi01.rc.local"

$ip_portgroup1_esxi1 = "10.0.0.11"

$ip_portgroup2_esxi1 = "10.0.0.21"

$ip_portgroup3_esxi1 = "10.0.0.31"

 

# Connect to vCenter Server

Connect-VIServer $vcenter_name

# Create vSwitch in ESXi Hosts

Foreach ($vmhost in (get-vmhost))

{

New-VirtualSwitch -VMHost $vmhost -Name $vswitch_name -Nic $device_nic1,$device_nic2 -MTU $mtu_value

}

 

########################## VMware ESXi 1 ##########################

New-VMHostNetworkAdapter -VMHost $esxi1_name -PortGroup $portgroup1_name -VirtualSwitch $vswitch_name -IP $ip_portgroup1_esxi1 -SubnetMask $network_mask -FaultToleranceLoggingEnabled:$false -ManagementTrafficEnabled:$false -VsanTrafficEnabled:$false -VMotionEnabled:$false -Mtu $mtu_value

New-VMHostNetworkAdapter -VMHost $esxi1_name -PortGroup $portgroup2_name -VirtualSwitch $vswitch_name -IP $ip_portgroup2_esxi1 -SubnetMask $network_mask -FaultToleranceLoggingEnabled:$false -ManagementTrafficEnabled:$false -VsanTrafficEnabled:$false -VMotionEnabled:$false -Mtu $mtu_value

New-VMHostNetworkAdapter -VMHost $esxi1_name -PortGroup $portgroup3_name -VirtualSwitch $vswitch_name -IP $ip_portgroup3_esxi1 -SubnetMask $network_mask -FaultToleranceLoggingEnabled:$false -ManagementTrafficEnabled:$false -VsanTrafficEnabled:$false -VMotionEnabled:$false -Mtu $mtu_value

Get-VirtualPortGroup -VMHost $esxi1_name -Name $portgroup1_name | Get-NicTeamingPolicy | Set-NicTeamingPolicy  -MakeNicActive $device_nic1 -MakeNicUnused $device_nic2

Get-VirtualPortGroup -VMHost $esxi1_name -Name $portgroup2_name | Get-NicTeamingPolicy | Set-NicTeamingPolicy  -MakeNicActive $device_nic2 -MakeNicUnused $device_nic1
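After the script runs, a quick check like the sketch below (reusing the same variables) confirms that the vSwitch and its VMkernel adapters were created as expected:

# Show the new vSwitch and its VMkernel adapters on the first host
Get-VirtualSwitch -VMHost $esxi1_name -Name $vswitch_name
Get-VMHostNetworkAdapter -VMHost $esxi1_name -VMKernel | Select-Object PortGroupName, IP, Mtu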

 

You can edit the script to use more or fewer hosts, network interfaces and VMkernels. Feel free to adapt it to your needs.
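For example, each additional host needs its own block of per-host variables plus the three New-VMHostNetworkAdapter calls and the two teaming-policy lines. The host name and IP addresses in the sketch below are hypothetical placeholders, and the traffic-type switches from the original calls are omitted for brevity (those traffic types are disabled by default):

########################## VMware ESXi 2 ##########################
$esxi2_name = "rc-esxi02.rc.local"
$ip_portgroup1_esxi2 = "10.0.0.12"
$ip_portgroup2_esxi2 = "10.0.0.22"
$ip_portgroup3_esxi2 = "10.0.0.32"
New-VMHostNetworkAdapter -VMHost $esxi2_name -PortGroup $portgroup1_name -VirtualSwitch $vswitch_name -IP $ip_portgroup1_esxi2 -SubnetMask $network_mask -Mtu $mtu_value
New-VMHostNetworkAdapter -VMHost $esxi2_name -PortGroup $portgroup2_name -VirtualSwitch $vswitch_name -IP $ip_portgroup2_esxi2 -SubnetMask $network_mask -Mtu $mtu_value
New-VMHostNetworkAdapter -VMHost $esxi2_name -PortGroup $portgroup3_name -VirtualSwitch $vswitch_name -IP $ip_portgroup3_esxi2 -SubnetMask $network_mask -Mtu $mtu_value
Get-VirtualPortGroup -VMHost $esxi2_name -Name $portgroup1_name | Get-NicTeamingPolicy | Set-NicTeamingPolicy -MakeNicActive $device_nic1 -MakeNicUnused $device_nic2
Get-VirtualPortGroup -VMHost $esxi2_name -Name $portgroup2_name | Get-NicTeamingPolicy | Set-NicTeamingPolicy -MakeNicActive $device_nic2 -MakeNicUnused $device_nic1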

 

Script download: EN-Create-vSwitch-iSCSI.ps1

 
