
In this blog post, we will walk through the steps to configure iOS Mobile SSO.

 

I will be assuming that your Workspace ONE UEM and Workspace ONE Identity environments have not been previously integrated, and that you already have an AirWatch Cloud Connector installed and syncing with Workspace ONE UEM.

 

In this blog, we'll cover:

  1. Configure Workspace ONE Identity in the UEM Console
  2. Enable Active Directory Basic
  3. Enable Mobile SSO
  4. Basic Troubleshooting

 

Validation of Prerequisites

 

  1. Log into Workspace ONE UEM -> Global Settings -> All Settings -> System -> Enterprise Integration -> Cloud Connector
  2. Ensure AirWatch Cloud Connector is enabled
  3. Perform a Test Connection. Make sure the connection is active
  4. Click on Directory Services from the left menu
  5. Ensure your directory has been configured and you can perform a successful test connection
  6. Close Settings and go to Accounts in the main left-hand menu of Workspace ONE UEM.
  7. Make sure you have users being synchronized into Workspace ONE UEM

 

Step 1: Configure Workspace ONE Identity in the UEM Console

Although this step is not absolutely required to get Mobile SSO working, I highly recommend you configure it, as it's required for Device Compliance, the Unified Catalog, and UEM Password Authentication.

In previous versions of Workspace ONE UEM, a lot of manual configuration was required to enable Workspace ONE Identity. Using the wizard in Workspace ONE UEM, we can automate most of these tasks.

 

Click on Getting Started

  1. Under Workspace ONE -> Begin Setup
  2. Under Identity and Access Management -> Click Configure for "Connect to VMware Identity Manager"
  3. Click Continue
  4. Enter your Tenant URL, User name, and Password
  5. Click Save
  6. If you check your Workspace ONE Identity tenant, you will see that the AirWatch configuration has been completed: Identity & Access Management -> Setup -> AirWatch

 

Step 2: Enable Active Directory Basic

VMware recommends you download and install the VMware Identity Manager connector to synchronize users from your Active Directory to Workspace ONE Identity. However, for the purpose of this blog we are going to leverage the built-in capabilities of Workspace ONE UEM to provision users directly into Workspace ONE Identity.

 

  1. In Workspace ONE UEM, Groups & Settings -> All Settings -> System -> Enterprise Integration -> VMware Identity Manager -> Configuration
  2. You will see under the server settings that "Active Directory Basic" is disabled
  3. Click "Enabled" beside Active Directory Basic
  4. You will be prompted to enter your password
  5. Click Next
  6. Enter a name for your directory (this will be the name of the directory in Workspace ONE Identity). You can leave Enable Custom Mapping at its default
  7. Click Save
  8. If everything worked successfully, you should see a new directory appear in Workspace ONE Identity with your synchronized users:

 

Step 3: Enable Mobile SSO

  1. Let's go back to the "Getting Started" section of Workspace ONE UEM
  2. Under Workspace ONE -> Continue
  3. Under Identity & Access Management -> Mobile Single-Sign-On, click Configure
  4. Click "Get Started"
  5. Click Configure to use the AirWatch Certificate Authority
  6. Click Start Configuration
  7. Click Finish when complete
  8. Click Close

Basic Troubleshooting

There are a variety of reasons that Mobile SSO can fail. Let's go over a few of the common ones.

 

  1. You are prompted for a username/password or the Workspace ONE Domain chooser when doing Mobile SSO
    The problem here is that Mobile SSO has failed and Workspace ONE Identity is triggering the fallback authentication mechanism. For the purpose of troubleshooting, I recommend removing the fallback mechanisms: in the iOS policy, remove Certificate Authentication and Password (Local Directory). When you test again, you will be prompted with an error message instead.
  2. You are prompted with the error message "Access denied as no valid authentication methods were found"
    a) Check to make sure the "Ios_Sso" profile was pushed to the device. By default, when the profile is created it does not have an assignment group. If it wasn't pushed, create a smart group, assign it to the profile, and publish.
  3. You received the error "The required field “Keysize” is missing" when deploying the iOS Mobile SSO profile
    Something went wrong with the import of the KDC Certificate from Workspace ONE Identity to UEM.
    a) Log into Workspace ONE Identity -> Identity & Access Management -> Identity Providers -> Built-In and download the KDC Certificate:
    b) Now switch back to UEM, Devices -> Profiles & Resources -> Profiles
    c) Edit the iOS profile
    d) Click Credentials and re-upload the KDC Certificate.

  4. You received the message "Kerberos NEGOTIATE failed or was cancelled by the user"

    Unfortunately, this is a catch-all error message for Mobile SSO failures and could be many things. I'll try to cover some of the common reasons here:

    a) In Workspace ONE UEM, check your iOS Mobile SSO profile -> Single Sign-On and verify the Realm is correct. For production it should be "VMWAREIDENTITY.COM". However, if you have a localized cloud tenant this can be different (VMWAREIDENTITY.EU, VMWAREIDENTITY.ASIA, VMWAREIDENTITY.CO.UK, VMWAREIDENTITY.COM.AU, VMWAREIDENTITY.CA, VMWAREIDENTITY.DE). For non-production, you might be on the vidmpreview.com domain; if so, it should be "VIDMPREVIEW.COM"

    b) When you use the wizard to create the Mobile SSO configuration, it will automatically add the application bundle IDs where Mobile SSO is allowed. You will need to either enter all your application bundle IDs into the profile or delete them all; if you don't specify any bundle IDs, all applications are allowed. For a POC, I recommend leaving this blank.

    c) Mobile SSO on iOS is based on Kerberos. The Kerberos negotiation works over UDP port 88. Ensure that your firewall is not blocking this port.

    d) The built-in AirWatch Certificate Authority uses the username (usually sAMAccountName) as the principal name on the certificate provisioned to the device. The Kerberos negotiation uses this username to formulate a user principal name, which needs to match in Workspace ONE Identity. A problem can occur when organizations define their UPN with a different prefix than the sAMAccountName: for example, if my username is "jdoe" but my UPN is "john.doe@domain.com", Mobile SSO will fail. In this scenario, we can:

    i) Sync the correct UPN prefix as a custom attribute into Workspace ONE UEM and provision that on the certificate
    ii) Sync the sAMAccountName as the UPN in Workspace ONE Identity (Note: this can cause issues with downstream applications, but you can always pull the real UPN as a custom attribute as well)
    iii) Use a custom certificate authority in Workspace ONE UEM and configure a Kerberos template with the correct values.
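To reason about checks a) and d) above, here is a small illustrative sketch (Python, with hypothetical hostnames and account values): the expected realm is simply the tenant's parent domain in upper case, and the principal the device presents must exactly match a UPN known to Workspace ONE Identity.

```python
def expected_realm(tenant_host: str) -> str:
    """Derive the expected Kerberos realm from a Workspace ONE Identity
    tenant hostname, e.g. 'acme.vmwareidentity.eu' -> 'VMWAREIDENTITY.EU'."""
    return tenant_host.split(".", 1)[1].upper()

def device_principal(cert_username: str, upn_suffix: str) -> str:
    """Kerberos forms a user principal name from the certificate's
    username (usually sAMAccountName) plus the domain suffix."""
    return f"{cert_username}@{upn_suffix}"

print(expected_realm("acme.vmwareidentity.com"))  # VMWAREIDENTITY.COM
print(expected_realm("acme.vidmpreview.com"))     # VIDMPREVIEW.COM

# sAMAccountName "jdoe" vs. directory UPN "john.doe@domain.com" -> mismatch,
# which is exactly the failure described in d)
print(device_principal("jdoe", "domain.com") == "john.doe@domain.com")  # False
```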

I recently came across a customer who had many applications running in clusters which required RDMs, and who wanted to automate the process of attaching and sharing the RDMs between multiple virtual machines. PowerCLI being the customer's preferred automation method, I started out by mapping the steps that had to be performed to successfully attach and share an RDM.

  • Find the available free ports on the SCSI Controller and add a new SCSI Controller if required.
  • Create a custom object to hold all the information about the virtual machine, RDM and SCSI Controller Bus and Port Numbers being used.
  • Use the information captured to attach the RDM on the first Virtual Machine.
  • Capture the new Disk information and share the same device to other Virtual Machines.

(Note: All the functions below are written with the assumption that all virtual machines are identical in terms of existing mapped storage.)

 

  • First things first – set up the parameters for the script call.
    1. PrimaryVirtualMachineName – VM on which the RDM will be added initially.
    2. SecondaryVirtualMachinesName – Comma-separated virtual machine names of VMs with which the RDM is to be shared.
    3. PathtoRDMfile – Path to the file containing the list of RDM WWNs.

 

param(
    $PrimaryVirtualMachineName,
    $SecondaryVirtualMachinesName = @(),
    $PathtoRDMfile
)
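For reference, the script later reads the file passed as PathtoRDMfile with Get-Content and prefixes each line with "naa.", so the file is expected to contain one bare LUN WWN per line (the identifiers below are made up for illustration):

```text
60003ff44dc75adc9b2a7e5f3c1d0a11
60003ff44dc75adc9b2a7e5f3c1d0a12
60003ff44dc75adc9b2a7e5f3c1d0a13
```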

 

  • Now we will create a function which builds a custom RDM object to hold all the information required to successfully attach and share an RDM. This is not strictly required, but it keeps all the required information in a single place and makes it easier to retrieve when needed.

 

function GetVMCustomObject {
    param (
        $VirtualMachine,
        $RDMS
    )
    $ESXCLI = $VirtualMachine | Get-VMHost | Get-EsxCli -V2
    $devobject = @()
    foreach ($RDM in $RDMS) {
        $RDM = 'naa.' + $RDM
        $Parameters = $ESXCLI.storage.core.device.list.CreateArgs()
        $Parameters.device = $RDM.ToLower()
        try {
            $naa = $ESXCLI.storage.core.device.list.Invoke($Parameters)
            Write-Host "Found device $($naa.Device)"
            $device = New-Object psobject
            $device | Add-Member -MemberType NoteProperty -Name "NAAID" -Value $naa.Device
            $device | Add-Member -MemberType NoteProperty -Name "SizeMB" -Value $naa.Size
            $device | Add-Member -MemberType NoteProperty -Name "DeviceName" -Value $naa.DevfsPath
            $device | Add-Member -MemberType NoteProperty -Name "BusNumber" -Value $null
            $device | Add-Member -MemberType NoteProperty -Name "UnitNumber" -Value $null
            $device | Add-Member -MemberType NoteProperty -Name "FileName" -Value $null
            $devobject += $device
        }
        catch {
            Write-Host "$RDM does not exist on host $($VirtualMachine | Get-VMHost)"
            Read-Host "Press any key to exit the script."
            Exit
        }
    }
    return $devobject
}

 

  • Next up is the function to create a new SCSI controller if required.

 

function CreateScSiController {
    param (
        [int]$BusNumber,
        $VirtualMachine
    )
    $spec = New-Object VMware.Vim.VirtualMachineConfigSpec
    $spec.DeviceChange = @()
    $spec.DeviceChange += New-Object VMware.Vim.VirtualDeviceConfigSpec
    $spec.DeviceChange[0].Device = New-Object VMware.Vim.ParaVirtualSCSIController
    $spec.DeviceChange[0].Device.SharedBus = 'physicalSharing'
    $spec.DeviceChange[0].Device.ScsiCtlrUnitNumber = 7
    $spec.DeviceChange[0].Device.DeviceInfo = New-Object VMware.Vim.Description
    $spec.DeviceChange[0].Device.DeviceInfo.Summary = 'New SCSI controller'
    $spec.DeviceChange[0].Device.DeviceInfo.Label = 'New SCSI controller'
    $spec.DeviceChange[0].Device.Key = -106
    $spec.DeviceChange[0].Device.BusNumber = $BusNumber
    $spec.DeviceChange[0].Operation = 'add'
    $VirtualMachine.ExtensionData.ReconfigVM($spec)
}

 

 

  • Next we will query the existing SCSI controllers, find the available free ports on them, and use the function we just created to add a new SCSI controller if required.

Note: This function has been written to always start with the SCSI controller with the highest bus number, but it could easily be modified to use any of the existing controllers.

 

function SCSiFreePorts {
    param (
        # Required ports is RDMS.Count
        $RequiredPorts,
        $PrimaryVirtualMachine,
        $SecondaryVirtualMachines
    )
    $ControllertoUse = @()
    $FreePorts = 0
    $AvailablePorts = @()
    while ($FreePorts -lt $RequiredPorts) {
        $ControllerNumber = @()
        $Controllers = Get-ScsiController -VM $PrimaryVirtualMachine | ? {$_.BusSharingMode -eq 'Physical' -and $_.Type -eq 'ParaVirtual'}
        $LatestControllerNumber = $null
        if ($Controllers) {
            foreach ($Controller in $Controllers) {
                $ControllerNumber += $Controller.ExtensionData.BusNumber
            }
            $LatestControllerNumber = ($ControllerNumber | measure -Maximum).Maximum
            $RecentController = $Controllers | ? {$_.ExtensionData.BusNumber -eq $LatestControllerNumber}
            $FreePorts += 15 - $RecentController.ExtensionData.Device.Count
            $ControllertoUse += $RecentController
        }
        if (($FreePorts -lt $RequiredPorts) -and ($LatestControllerNumber -eq 3)) {
            Write-Host "SCSI controller limit has been exhausted and cannot accommodate all RDMs. Exiting the script."
            Exit
        }
        if (($FreePorts -lt $RequiredPorts) -or !$Controllers) {
            CreateScSiController -BusNumber ($LatestControllerNumber + 1) -VirtualMachine $PrimaryVirtualMachine
            foreach ($Virtualmachine in $SecondaryVirtualMachines) {
                CreateScSiController -BusNumber ($LatestControllerNumber + 1) -VirtualMachine $Virtualmachine
            }
        }
    }
    foreach ($CurrentController in $ControllertoUse) {
        $ConnectedDevices = $CurrentController.ExtensionData.Device
        $UsedPort = @()
        foreach ($Device in $ConnectedDevices) {
            $DevObj = $PrimaryVirtualMachine.ExtensionData.Config.Hardware.Device | ? {$_.Key -eq $Device}
            $UsedPort += $DevObj.UnitNumber
        }
        for ($i = 0; $i -le 15; $i++) {
            # Unit 7 is reserved for the controller itself
            if (($i -ne 7) -and ($UsedPort -notcontains $i)) {
                $PortInfo = New-Object -TypeName PSObject
                $PortInfo | Add-Member -MemberType NoteProperty -Name "BusNumber" -Value $CurrentController.ExtensionData.BusNumber
                $PortInfo | Add-Member -MemberType NoteProperty -Name "PortNumber" -Value $i
                $AvailablePorts += $PortInfo
            }
        }
    }
    return $AvailablePorts
}

 

  • Now, the function to add the RDM to the first virtual machine.

 

function AddRDM {
    param (
        $VirtualMachine,
        [String]$DeviceName,
        [Int]$ControllerKey,
        [Int]$UnitNumber,
        [Int]$Size
    )
    $spec = New-Object VMware.Vim.VirtualMachineConfigSpec
    $spec.DeviceChange = @()
    $spec.DeviceChange += New-Object VMware.Vim.VirtualDeviceConfigSpec
    $spec.DeviceChange[0].FileOperation = 'create'
    $spec.DeviceChange[0].Device = New-Object VMware.Vim.VirtualDisk
    # $Size comes from the objects returned by GetVMCustomObject and is in MB
    $spec.DeviceChange[0].Device.CapacityInBytes = $Size * 1024 * 1024
    $spec.DeviceChange[0].Device.StorageIOAllocation = New-Object VMware.Vim.StorageIOAllocationInfo
    $spec.DeviceChange[0].Device.StorageIOAllocation.Shares = New-Object VMware.Vim.SharesInfo
    $spec.DeviceChange[0].Device.StorageIOAllocation.Shares.Shares = 1000
    $spec.DeviceChange[0].Device.StorageIOAllocation.Shares.Level = 'normal'
    $spec.DeviceChange[0].Device.StorageIOAllocation.Limit = -1
    $spec.DeviceChange[0].Device.Backing = New-Object VMware.Vim.VirtualDiskRawDiskMappingVer1BackingInfo
    $spec.DeviceChange[0].Device.Backing.CompatibilityMode = 'physicalMode'
    $spec.DeviceChange[0].Device.Backing.FileName = ''
    $spec.DeviceChange[0].Device.Backing.DiskMode = 'independent_persistent'
    $spec.DeviceChange[0].Device.Backing.Sharing = 'sharingMultiWriter'
    # Device name is in the format /vmfs/devices/disks/naa.<LUN ID>
    $spec.DeviceChange[0].Device.Backing.DeviceName = $DeviceName
    # Controller key is retrieved at run time using the controller bus number
    $spec.DeviceChange[0].Device.ControllerKey = $ControllerKey
    # Unit number is the controller port, provided by the SCSiFreePorts function
    $spec.DeviceChange[0].Device.UnitNumber = $UnitNumber
    $spec.DeviceChange[0].Device.CapacityInKB = $Size * 1024
    $spec.DeviceChange[0].Device.DeviceInfo = New-Object VMware.Vim.Description
    $spec.DeviceChange[0].Device.DeviceInfo.Summary = 'New Hard disk'
    $spec.DeviceChange[0].Device.DeviceInfo.Label = 'New Hard disk'
    $spec.DeviceChange[0].Device.Key = -101
    $spec.DeviceChange[0].Operation = 'add'
    return $VirtualMachine.ExtensionData.ReconfigVM_Task($spec)
}

 

  • To share the RDM between virtual machines, we will use the function below.

 

function ShareRDM {
    param (
        $VirtualMachine,
        [String]$FileName,
        [Int]$ControllerKey,
        [Int]$UnitNumber,
        [Int]$Size
    )
    $spec = New-Object VMware.Vim.VirtualMachineConfigSpec
    $spec.DeviceChange = @()
    $spec.DeviceChange += New-Object VMware.Vim.VirtualDeviceConfigSpec
    $spec.DeviceChange[0].Device = New-Object VMware.Vim.VirtualDisk
    # $Size comes from the objects returned by GetVMCustomObject and is in MB
    $spec.DeviceChange[0].Device.CapacityInBytes = $Size * 1024 * 1024
    $spec.DeviceChange[0].Device.StorageIOAllocation = New-Object VMware.Vim.StorageIOAllocationInfo
    $spec.DeviceChange[0].Device.StorageIOAllocation.Shares = New-Object VMware.Vim.SharesInfo
    $spec.DeviceChange[0].Device.StorageIOAllocation.Shares.Shares = 1000
    $spec.DeviceChange[0].Device.StorageIOAllocation.Shares.Level = 'normal'
    $spec.DeviceChange[0].Device.StorageIOAllocation.Limit = -1
    $spec.DeviceChange[0].Device.Backing = New-Object VMware.Vim.VirtualDiskRawDiskMappingVer1BackingInfo
    # FileName is the disk file to share, in the form "[<Datastore name>] VM Name/disk name.vmdk";
    # it is retrieved at runtime from the VM view using the device bus number and unit number
    $spec.DeviceChange[0].Device.Backing.FileName = $FileName
    $spec.DeviceChange[0].Device.Backing.DiskMode = 'persistent'
    $spec.DeviceChange[0].Device.Backing.Sharing = 'sharingMultiWriter'
    # Controller key is retrieved at run time using the controller bus number
    $spec.DeviceChange[0].Device.ControllerKey = $ControllerKey
    # Unit number is the controller port, provided by the SCSiFreePorts function
    $spec.DeviceChange[0].Device.UnitNumber = $UnitNumber
    $spec.DeviceChange[0].Device.CapacityInKB = $Size * 1024
    $spec.DeviceChange[0].Device.DeviceInfo = New-Object VMware.Vim.Description
    $spec.DeviceChange[0].Device.DeviceInfo.Summary = 'New Hard disk'
    $spec.DeviceChange[0].Device.DeviceInfo.Label = 'New Hard disk'
    $spec.DeviceChange[0].Device.Key = -101
    $spec.DeviceChange[0].Operation = 'add'
    return $VirtualMachine.ExtensionData.ReconfigVM_Task($spec)
}

 

To stitch it all together, we just have to call these functions as and when required, but first a few pre-checks.

Note: These pre-checks are not exhaustive; they were built to satisfy customer-specific requirements, and more checks and balances could be added.

 

Now we do not want to just start modifying things without making sure that the virtual machines are powered off, do we?

 

$PrimaryVirtualMachine = Get-VM -Name $PrimaryVirtualMachineName
if ($PrimaryVirtualMachine.PowerState -ne 'PoweredOff') {
    Read-Host -Prompt "$PrimaryVirtualMachineName is not powered off. Make sure all the virtual machines are powered off before running the script again. Press any key to exit."
    Exit
}
$SecondaryVirtualMachines = @()
foreach ($VM in $SecondaryVirtualMachinesName) {
    $SecondaryVM = Get-VM -Name $VM
    if ($SecondaryVM.PowerState -ne 'PoweredOff') {
        Read-Host -Prompt "$VM is not powered off. Make sure all the virtual machines are powered off before running the script again. Press any key to exit."
        Exit
    }
    $SecondaryVirtualMachines += $SecondaryVM
}

 

We also know that a virtual machine supports a total of 64 disk devices, of which 4 are IDE, so we will check whether we have enough ports available to successfully attach all the given RDMs.
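As a quick sanity check on that arithmetic (a sketch, separate from the script): four SCSI controllers with 15 usable ports each give 60 SCSI disks, which is why the check below compares against 60.

```python
SCSI_CONTROLLERS = 4       # buses 0 through 3
PORTS_PER_CONTROLLER = 15  # 16 units minus unit 7, reserved for the controller
IDE_DISKS = 4

scsi_disks = SCSI_CONTROLLERS * PORTS_PER_CONTROLLER
print(scsi_disks)              # 60
print(scsi_disks + IDE_DISKS)  # 64
```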

$RDMS = Get-Content -Path $PathtoRDMfile
$AttachedDisks = $PrimaryVirtualMachine | Get-HardDisk
if (($AttachedDisks.Count + $RDMS.Count) -gt 60) {
    Read-Host -Prompt 'Configuration maximum for disks reached. Cannot attach all provided disks. Press any key to exit.'
    Exit
}

 

Let's find out the bus numbers and port numbers that we will use to attach the RDMs.

 

$DeviceObjects = GetVMCustomObject -VirtualMachine $PrimaryVirtualMachine -RDMS $RDMS
$PortsAvailable = SCSiFreePorts -RequiredPorts $RDMS.Count -PrimaryVirtualMachine $PrimaryVirtualMachine -SecondaryVirtualMachines $SecondaryVirtualMachines

 

for ($i = 0; $i -lt $RDMS.Count; $i++) {
    $CurrentObject = $DeviceObjects[$i]
    $PorttoUse = $PortsAvailable[$i]
    $CurrentObject.UnitNumber = $PorttoUse.PortNumber
    $CurrentObject.BusNumber = $PorttoUse.BusNumber
}

 

Now we will use all this collected information to finish the job.

 

foreach ($DiskObject in $DeviceObjects) {
    $Controller = Get-ScsiController -VM $PrimaryVirtualMachine | ? {$_.ExtensionData.BusNumber -eq $DiskObject.BusNumber}
    $task = AddRDM -VirtualMachine $PrimaryVirtualMachine -DeviceName $DiskObject.DeviceName -ControllerKey $Controller.ExtensionData.Key -UnitNumber $DiskObject.UnitNumber -Size $DiskObject.SizeMB
    Start-Sleep -Seconds 5
    $PVM = Get-VM -Name $PrimaryVirtualMachineName
    $Disk = $PVM.ExtensionData.Config.Hardware.Device | ? {($_.UnitNumber -eq $DiskObject.UnitNumber) -and ($_.ControllerKey -eq $Controller.ExtensionData.Key)}
    $DiskObject.FileName = $Disk.Backing.FileName
    foreach ($VM in $SecondaryVirtualMachines) {
        # Look up the matching controller on the secondary VM, not the primary
        $SController = Get-ScsiController -VM $VM | ? {$_.ExtensionData.BusNumber -eq $DiskObject.BusNumber}
        ShareRDM -VirtualMachine $VM -FileName $Disk.Backing.FileName -ControllerKey $SController.ExtensionData.Key -UnitNumber $DiskObject.UnitNumber -Size $DiskObject.SizeMB
    }
}
Write-Host "RDMs have been added on all virtual machines with the details below:"
$DeviceObjects | Select-Object NAAID, BusNumber, UnitNumber | Format-Table

 

Now just save the file and run it as below:

 

<path to script><scriptname.ps1> -PrimaryVirtualMachineName <VM Name> -SecondaryVirtualMachinesName <VM Name>,<VM Name>,<VM Name> -PathtoRDMfile <RDM File Path>

 

This script has been tested in a lab with 1 primary and 2 secondary virtual machines and up to 10 RDM devices. Below are the specific use cases tested.

 

  • RDM attachment with no existing physical-mode SCSI controller.
  • RDM attachment with an existing physical-mode SCSI controller and no existing RDMs.
  • RDM attachment with an existing physical-mode SCSI controller and serially attached RDMs.
  • RDM attachment with an existing physical-mode SCSI controller and randomly attached RDMs.
  • RDM attachment across multiple physical-mode SCSI controllers when the existing controller does not have enough ports available.

 

I understand there could be better and/or easier ways to do this, and the script could be made more efficient; any suggestions are welcome. I have attached the completed script to the post, so feel free to use or modify it as you see fit.

We quite often come across scenarios where customers want to leverage Microsoft Authenticator when using Workspace ONE UEM and/or Horizon.

 

In this blog, I'd like to go through the various options and outline the user experience with each of the options.

 

The main use cases we see are:

 

  • Microsoft MFA for Horizon Desktop
  • Microsoft MFA for SaaS Applications federated directly with Workspace ONE.
  • Microsoft MFA for Device Enrollment in Workspace ONE UEM
  • Microsoft MFA for SaaS Applications federated with Azure AD. (Including Office 365)

 

There are 3 integration options that you can consider to integrate Microsoft Authenticator with Workspace ONE. The use cases mentioned above can fit into one or more of the following integration options.

 

1. Azure AD as a 3rd Party IdP in Workspace ONE

 

Use Cases:

  • Microsoft MFA for Horizon Desktop
  • Microsoft MFA for SaaS Applications federated directly with Workspace ONE.
  • Microsoft MFA for Device Enrollment in Workspace ONE UEM

 

Use Cases not Supported:

  • Microsoft MFA for SaaS Applications federated with Azure AD. (Including Office 365)

 

 

In this option, the following needs to be configured:

  • Azure AD configured as a 3rd Party IdP in Workspace ONE
  • Workspace ONE configured as an enterprise app in Azure
  • Conditional Access Policy Configured in Azure AD to require Microsoft Authenticator for the Workspace ONE Application.

 


Let's walk through the authentication flow in this option:

  1. The user will access their Horizon Desktop (or any application that is federated directly with Workspace ONE).

  2. The application will send a SAML Authentication Request to Workspace ONE
  3. Assuming the access policy in Workspace ONE is configured for Azure Authentication, the user will be redirected to Azure AD.
  4. The user will enter their email address.
  5. Assuming the domain is not currently federated with another IdP, Azure will prompt the user to enter their password.
  6. Azure conditional access policies will then trigger for Microsoft MFA.
  7. The user will be returned to Workspace ONE and subsequently authenticated to Horizon. (Note: Horizon should be configured with TrueSSO for optimal user experience).
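For background on step 2, a SAML Authentication Request sent with the standard HTTP-Redirect binding is raw-DEFLATE compressed, base64-encoded, and URL-encoded onto the IdP's SSO URL. Below is a minimal, illustrative sketch of how a service provider builds such a URL; the endpoint URLs are placeholders, and a production request would typically also be signed.

```python
import base64
import datetime
import urllib.parse
import uuid
import zlib

def build_saml_redirect(idp_sso_url: str, sp_entity_id: str, acs_url: str) -> str:
    """Build an HTTP-Redirect binding URL carrying a SAML AuthnRequest."""
    issue_instant = datetime.datetime.now(datetime.timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
    authn_request = (
        f'<samlp:AuthnRequest xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol" '
        f'xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion" '
        f'ID="_{uuid.uuid4().hex}" Version="2.0" IssueInstant="{issue_instant}" '
        f'AssertionConsumerServiceURL="{acs_url}">'
        f"<saml:Issuer>{sp_entity_id}</saml:Issuer>"
        f"</samlp:AuthnRequest>"
    )
    # The Redirect binding uses raw DEFLATE (no zlib header or checksum)
    compressor = zlib.compressobj(9, zlib.DEFLATED, -15)
    deflated = compressor.compress(authn_request.encode()) + compressor.flush()
    saml_request = base64.b64encode(deflated).decode()
    return idp_sso_url + "?" + urllib.parse.urlencode({"SAMLRequest": saml_request})

url = build_saml_redirect(
    "https://idp.example.com/sso",   # hypothetical IdP SSO endpoint
    "https://sp.example.com",        # hypothetical SP entity ID
    "https://sp.example.com/acs",    # hypothetical assertion consumer URL
)
```

The browser simply follows this URL; the IdP inflates and parses the SAMLRequest parameter and takes over authentication from there.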

 

2. Workspace ONE as a Federated Domain in Azure AD

 

Use Cases:

  • Microsoft MFA for SaaS Applications federated with Azure AD. (Including Office 365)

 

 

Use Cases not supported:

  • Microsoft MFA for Horizon Desktop
  • Microsoft MFA for SaaS Applications federated directly with Workspace ONE.
  • Microsoft MFA for Device Enrollment in Workspace ONE UEM

 

 

 

In this option, the following needs to be configured:

  • Azure domain must be federated to Workspace ONE
  • Conditional Access Policy Configured in Azure AD to require Microsoft Authenticator for the Workspace ONE Application.
  • Mobile SSO/Certificate Authentication Configured in Workspace ONE


Let's walk through the authentication flow in this option:

  1. The user will access Office 365 (or any application federated with Azure AD).
  2. The user will enter their email address.
  3. The user will be redirected to Workspace ONE
  4. Workspace ONE will authenticate the user using Mobile SSO, Certificate or some other authentication mechanism (as well as checking device compliance).
  5. Workspace ONE will respond with a successful response back to Azure AD.
  6. Azure conditional access policies will then trigger for Microsoft MFA.
  7. The user will be successfully authenticated into Office 365 (or other Azure-federated applications).

 

3. Workspace ONE with Microsoft Azure MFA Server

 

Use Cases:

  • Microsoft MFA for Horizon Desktop
  • Microsoft MFA for SaaS Applications federated directly with Workspace ONE.
  • Microsoft MFA for Device Enrollment in Workspace ONE UEM
  • Microsoft MFA for SaaS Applications federated with Azure AD. (Including Office 365)*

          *For Office 365 (and other apps federated with Azure), the Azure domain must be federated with Workspace ONE.

 

Use Cases not supported:

  • N/A

 

In this option, the following needs to be configured:

  • Azure MFA Server downloaded and installed on premises.
  • Workspace ONE connector installed on premises.
  • Workspace ONE configured as a RADIUS client in Azure MFA Server

 

 


Let's walk through the authentication flow in this option:

  1. The user will access any application federated with Workspace ONE (or a Horizon/Citrix application).
  2. Workspace ONE will prompt for their username/password
  3. After clicking "Sign In", a RADIUS call will be made via the connector to the Microsoft Azure MFA Server
  4. The MFA server will push a notification to the device to approve the request.

If you have configured Okta as a 3rd Party IDP in Workspace ONE you might have noticed that the "Logout" function in Workspace ONE doesn't log you out of your Okta session. The reason for this is that Okta does not include the "SingleLogoutService" by default in the metadata that is used when creating the 3rd Party IDP in Workspace ONE.

 

There are a couple of extra steps you need to take to enable this functionality. Before you begin, make sure you download your signing certificate from Workspace ONE.

 

  1. Log into Workspace ONE
  2. Click on Catalog -> Settings (Note: click Settings directly, not via the down arrow)
  3. Click on SAML Metadata
  4. Scroll down to the Signing Certificate and Click Download

Now you will need to log into your Okta Administration Console.

  1. Under Applications -> Click on the Workspace ONE application that you previously created
  2. Click on the General Tab
  3. Under SAML Settings -> Click Edit
  4. Click Next
  5. Click on "Show Advanced Settings"
  6. Enable the Checkbox that says "Enable Single Logout"
  7. Under "Single Logout URL", enter:  "https://[WS1Tenant]/SAAS/auth/saml/slo/response"
  8. Under SP Issuer, copy the value you have configured for Audience URI (SP Entity ID). This value should be: "https://[WS1Tenant]/SAAS/API/1.0/GET/metadata/sp.xml"
  9. Under "Signature Certificate", browse to the location you downloaded the Workspace ONE certificate in the previous steps.
  10. Click Upload Certificate
  11. Click Next
  12. Click Finish
  13. Click on the "Sign On" tab
  14. Click on Identity Provider Metadata
  15. You will notice that your Identity Provider Metadata now includes the SingleLogoutService:
  16. Copy this metadata.
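If you'd like to sanity-check the copied metadata before pasting it, the following illustrative Python snippet looks for a SingleLogoutService element in the SAML metadata namespace (the sample document below is a minimal stand-in, not Okta's actual output):

```python
import xml.etree.ElementTree as ET

MD_NS = "urn:oasis:names:tc:SAML:2.0:metadata"

def has_single_logout(metadata_xml: str) -> bool:
    """Return True if the IdP metadata advertises a SingleLogoutService."""
    root = ET.fromstring(metadata_xml)
    return root.find(f".//{{{MD_NS}}}SingleLogoutService") is not None

sample = f"""<EntityDescriptor xmlns="{MD_NS}" entityID="http://www.okta.com/example">
  <IDPSSODescriptor protocolSupportEnumeration="urn:oasis:names:tc:SAML:2.0:protocol">
    <SingleLogoutService Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST"
        Location="https://example.okta.com/app/example/slo/saml"/>
  </IDPSSODescriptor>
</EntityDescriptor>"""

print(has_single_logout(sample))  # True
```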

 

Now switch back to Workspace ONE

 

  1. Go to Identity & Access Management
  2. Click on Identity Providers
  3. Click on your Okta 3rd Party IDP you previously created
  4. Paste your new Okta Metadata and click "Process IdP Metadata"
  5. Scroll down to "Single Sign-out Configuration" and check "Enable". (Note: Make sure the other two values are left blank)

Now you should be able to log out from Workspace ONE and be signed out of both solutions.

 


VMware's Workspace ONE provides a digital workspace platform with a seamless user experience across any application on any device. Users can access a platform-native catalog to download and install applications regardless of whether it's an iOS, Android, Windows 10 or macOS device. They can access both web and SaaS applications as well as their virtualized applications, including Horizon and Citrix. Workspace ONE is designed to keep the user experience "consumer simple" while keeping the platform "enterprise secure".

 

VMware promotes a "zero-trust" approach to accessing corporate applications. Workspace ONE Unified Endpoint Management is a critical element in achieving a zero-trust model, ensuring the device itself is secure enough to access your corporate data. However, to achieve a zero-trust model we need to include both device trust and identity context. This is where the risk-based identity assurance offered by RSA SecurID Access becomes the perfect complement to Workspace ONE.

 

RSA SecurID Access makes access decisions based on sophisticated machine learning algorithms that take into consideration both risk and behavioral analytics. RSA SecurID Access offers a broad range of authentication methods including modern mobile multi-factor authenticators (e.g., push notification, one-time password, SMS and biometrics) as well as traditional hard and soft tokens.

 

I'm pretty excited about the integration between Workspace ONE and RSA SecurID Access because it offers extreme flexibility to control when and how multi-factor authentication will be used. After the initial setup, it also allows me to control everything from Workspace ONE.

 

RSA SecurID Access provides 3 levels of assurance that you can leverage within your access policies. You have full control to map the authenticators to the appropriate levels based on your licensing from RSA.

 

Screen Shot 04-15-19 at 02.09 PM.PNG

 

You can create Access Policies in RSA SecurID Access that will map to the appropriate assurance levels:

 

Screen Shot 04-15-19 at 02.14 PM.PNG

 

In my environment, I've created 3 policies:

Screen Shot 04-15-19 at 03.09 PM.PNG

Once you've completed your access policies, you can then add your Workspace ONE tenant as a relying party.

 

Screen Shot 04-15-19 at 05.11 PM.PNG

 

Now this is where things get really interesting, and you'll see why I'm excited about this integration. It's fairly common for a digital workspace or web portal to call out to an MFA provider to perform the necessary authentication and return the response. The problem that typically comes into play is whether the authenticators being used for MFA are too much or too little for the application being accessed.  In most cases, the MFA provider is not aware of which application is being accessed and is only responding to the call from the relying party.  Keep in mind that "User Experience" is at the forefront of the Workspace ONE solution.

 

The integration between Workspace ONE and RSA SecurID Access allows us to control which Access Policy (or level of assurance) will be used from within Workspace ONE.

 

In Workspace ONE, we can create the same policies that we did in RSA SecurID Access:

Screen Shot 04-15-19 at 02.46 PM.PNG

 

In Workspace ONE we can directly assign Web, SaaS or Virtual applications that require High Assurance into the "High Assurance" access policy and apps that require "Medium or Low Assurance" into the appropriate policy. When applications are accessed in Workspace ONE, it will automatically send the request to RSA SecurID Access with the requested policy to use for authentication.

 

So how does Workspace ONE specify which policy RSA SecurID Access should use for authentication? It's actually quite simple.  The integration between Workspace ONE and RSA SecurID Access is based on SAML.

 

Initial authentication into Workspace ONE will typically come from Mobile SSO or Certificate Based Authentication (although other forms of authentication are available). After the initial authentication or once the user clicks on a specific application, Workspace ONE will send a SAML Authentication Request which will include the subject who needs additional verification:

 

<saml:Subject xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion">

        <saml:NameID Format="urn:oasis:names:tc:SAML:1.1:nameid-format:unspecified">steve</saml:NameID>

</saml:Subject><samlp:NameIDPolicy AllowCreate="false"

 

When the SAML Request is sent from Workspace ONE, it will also include the access policy as part of the SAML AuthnContextClassRef:

 

<samlp:RequestedAuthnContext>

<saml:AuthnContextClassRef xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion">urn:rsa:names:tc:SAML:2.0:ac:classes:spec::LowWS1</saml:AuthnContextClassRef>

</samlp:RequestedAuthnContext>

 

You can see that the AuthnContextClassRef specifies the policy that RSA SecurID Access should use for authentication.
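As a rough illustration of the mapping (this is my own sketch, not VMware or RSA code), the policy name sits in the last segment of that URN and can be pulled out with a simple string split:

```javascript
// Illustrative only: extract the policy name from the
// AuthnContextClassRef URN seen in the SAML request above.
function policyFromAuthnContext(classRef) {
    // The policy name is the final segment after the last ':'
    var parts = classRef.split(":");
    return parts[parts.length - 1];
}

var urn = "urn:rsa:names:tc:SAML:2.0:ac:classes:spec::LowWS1";
console.log(policyFromAuthnContext(urn)); // LowWS1
```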

 

When you create a 3rd Party IDP for RSA SecurID Access, you can specify the AuthnContextClassRef when defining the authentication methods:

Screen Shot 04-15-19 at 05.02 PM 001.PNG

Screen Shot 04-15-19 at 05.03 PM.PNG

 

I've actually left out a key element of the RSA SecurID Access solution, which is the Risk Level. Even though we've specifically called out the Low Assurance Policy, we can have RSA dynamically change that to High based on the user's risk score. RSA SecurID Access can use an "Identity Confidence" score to choose the appropriate assurance level. This is configured in the access policy:

 

Screen Shot 04-17-19 at 01.45 PM.PNG

 

By leveraging RSA SecurID Access with VMware Workspace ONE, we can now have risk-based identity assurance on a per-app level within Workspace ONE. For current Workspace ONE customers, this integration is based on SAML, so it does not require RADIUS and has no dependency on the VIDM Connector.

 

Together this keeps the user experience great on apps that might not need a high level of assurance and keeps the enterprise secure on the apps that require the high level of assurance.

Hi all,

so... I figured something out and wanted to share it with you. vRA 7.5 (Hotfix 4, VMware Knowledge Base) has come a long way in regard to custom forms. However, there are still some annoyances, such as having to input memory in MB. I understand that this is a leftover from the vCenter API... but seriously... it's 2019!

 

So I finally came up with a workaround (requires custom forms, vRA 7.5 Hotfix >= 4).

1) Create a vRO action (I call it memGB2MB) that looks like this:

Input: memGB (Number)
Output: Number
Script: return memGB*1024;
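Outside of vRO, the same action body can be exercised as a plain function; a minimal sketch (the function wrapper is only for testing, an actual vRO action contains just the script body):

```javascript
// Sketch of the memGB2MB vRO action body as a standalone function.
// In vRO the action contains only the return statement; the wrapper
// function here is just so the conversion can be run and tested.
function memGB2MB(memGB) {
    // vRA/vCenter expect memory in MB, so convert GB -> MB
    return memGB * 1024;
}

console.log(memGB2MB(4)); // 4096
```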

 

2) Create a custom form for your blueprint and drag at least CPU and Memory onto it.

3) Add a new Integer field and call it Memory (GB). Then assign it all the restrictions that the blueprint has in regard to memory.
(If you haven't noticed yet: the blueprint restrictions are transferred ONLY once, when the custom form is created. If you update the memory or CPU limits in the blueprint later, they are not passed dynamically to the custom form.)

4) Now click on the Memory (MB) field and go to Values. Select external source, then the vRO action you created, with the Memory (GB) field as the input.

5) You can give the whole thing a go now... or just directly set the Memory (MB) field's visibility to false.

Well... there is more.

 

This method also allows you to use an Array/String to display a dropdown menu for CPU and/or memory. You just need a new action that takes a string as input, returns a number, and uses the following script:

return parseInt(memGB, 10) * 1024;
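Wrapped the same way for testing (again, the wrapper is illustrative; in vRO only the script body is needed), the string-input variant parses the selection before converting:

```javascript
// Sketch of the string-input variant used for dropdown (Array/String)
// fields: parse the selected string to a number, then convert GB -> MB.
function memGB2MBFromString(memGB) {
    return parseInt(memGB, 10) * 1024;
}

console.log(memGB2MBFromString("8")); // 8192
```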

 

Have fun!

In vSAN 6.7 U1, you can now check a datastore's free capacity with a specific VM storage policy taken into account.

 

As shown below, the "Usable capacity with the policy" field on the vSAN datastore's "Capacity Overview" screen shows the free capacity available to virtual machines. In this screenshot, it is calculated with the default policy, "vSAN Default Storage Policy".

vsan-usage-01.png

 

Now let's check the usable capacity with a different VM storage policy. The default policy uses "Failures to tolerate: 1 failure - RAID-1 (Mirroring)", so I deliberately created a policy named "policy-vsan-raid0" with "No data redundancy".

vsan-usage-02.png

 

When this policy is selected on the "Capacity Overview" screen, you can see that the usable capacity with the policy doubles (although there is no redundancy).

vsan-usage-03.png

 

In fact, this information can now also be checked with PowerCLI.

PowerCLI 11.2 Released, with more goodness for vSAN!

 

So let's do the same free-capacity check with PowerCLI.

 

This post uses PowerCLI 11.2, with a connection to vCenter already established.

PowerCLI> Import-Module VMware.PowerCLI

PowerCLI> Get-Module VMware.PowerCLI | select Name,Version

 

Name            Version

----            -------

VMware.PowerCLI 11.2.0.12483598

 

 

In PowerCLI, you can check vSAN capacity information with Get-VsanSpaceUsage.

Note: infra-cluster-01 is the name of the vSAN cluster.

 

However, FreeSpaceGB and CapacityGB shown below do not reflect the free capacity based on the policy we are checking. For that, look at the VsanWhatIfCapacity property.

PowerCLI> Get-VsanSpaceUsage -Cluster infra-cluster-01

 

Cluster              FreeSpaceGB     CapacityGB

-------              -----------     ----------

infra-cluster-01     2,911.358       4,657.552

 

 

When no VM storage policy is specified, VsanWhatIfCapacity holds no information.

PowerCLI> Get-VsanSpaceUsage -Cluster infra-cluster-01 | select -ExpandProperty VsanWhatIfCapacity

PowerCLI>

 

With the default policy specified:

PowerCLI> Get-VsanSpaceUsage -Cluster infra-cluster-01 -StoragePolicy "vSAN Default Storage Policy" | select -ExpandProperty VsanWhatIfCapacity | Format-List

 

StoragePolicy         : vSAN Default Storage Policy

TotalWhatIfCapacityGB : 2328.77610206604

FreeWhatIfCapacityGB  : 1455.28823816683

 

 

When the "policy-vsan-raid0" policy created earlier is specified, it is reflected in the result:

PowerCLI> Get-VsanSpaceUsage -Cluster infra-cluster-01 -StoragePolicy "policy-vsan-raid0" | select -ExpandProperty VsanWhatIfCapacity | Format-List

 

StoragePolicy         : policy-vsan-raid0

TotalWhatIfCapacityGB : 4657.55220413208

FreeWhatIfCapacityGB  : 2910.5833046427
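The what-if numbers follow directly from the policy's space overhead: RAID-1 with "Failures to tolerate: 1" writes two full copies, so the usable capacity is the raw CapacityGB divided by 2, while the no-redundancy policy consumes raw capacity as-is. A quick sketch of that arithmetic (my own model, not a PowerCLI call):

```javascript
// Rough model of the vSAN "what-if" capacity shown above: raw capacity
// divided by the policy's space-consumption factor
// (RAID-1 FTT=1 => factor 2, RAID-0 / no redundancy => factor 1).
function whatIfCapacityGB(rawGB, factor) {
    return rawGB / factor;
}

var rawGB = 4657.55; // CapacityGB reported by Get-VsanSpaceUsage
console.log(whatIfCapacityGB(rawGB, 2)); // ~2328.78 (vSAN Default Storage Policy)
console.log(whatIfCapacityGB(rawGB, 1)); // 4657.55 (policy-vsan-raid0)
```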

 

 

It might be handy to include this in your verification steps when you create a custom VM storage policy.

 

That's it for checking free capacity on a vSAN datastore.

vRealize Operations is an efficient monitoring tool that can integrate many endpoints, such as vCenter, vSAN, network devices, vRA, databases, and more.

 

vROps helps monitor the entire infrastructure you integrate, lets you configure alerts, and provides recommendations as a precautionary measure.

For any such endpoint to be monitored from vROps, we have to perform the steps below:

 

1. Download the management pack.

2. Install the management pack on vROps.

3. Configure it, and vROps will start monitoring.

 

Now let's go through some tips for troubleshooting vRA adapter issues.

 

1. Is a compatible vRA adapter version installed?

Find your management pack and its documentation here: VMware Solution Exchange

Search for the pack name --> click on Support --> you will find the docs and the list of compatible versions for each pack.

 

2. vRealize Automation appliance URL used:

For a simple deployment, use the vRA appliance name/URL.

For a distributed setup with an LB, use your LB name/URL.

NOTE: Always prefer the DNS name over the IP.

 

3. Tenant:

Here you can give a specific tenant name, or use the * symbol to include all tenants.

 

4. Credentials provided while configuring the vRA adapter.

*Sysadmin: typically the user who installed vRA, holding both a system-wide role and the tenant admin role.

*The super user must have the following privileges in vRA:

  • Infrastructure administrator rights for all tenants.
  • Infrastructure architect rights for all tenants.
  • Tenant administrator rights for all tenants.
  • Software architect roles for all tenants.
  • Fabric group administrator rights for all fabric groups, in all tenants.

This should take care of the vRA adapter configuration part. There is another interesting part, and that's connectivity.

 

5.Connectivity/Communication between VRA and vROps node

 

If vRA is deployed in a simple setup, the checks are straightforward:

*The vROps node that has the vRA solution installed should have connectivity mainly to the IaaS web node.

*The vROps node that has the vRA solution installed should also have connectivity to the IaaS Manager node and the vRA appliance.

 

If vRA is deployed in a distributed setup with load balancers, check the following:

*The vROps node that has the vRA solution installed should have connectivity to the LB used between the vRA appliances.

*The vROps node that has the vRA solution installed should have connectivity to the LB used between the IaaS Windows servers.

*Internally, the LB should have connectivity to the vRA appliances.

*The LB used between the IaaS nodes should have connectivity to the IaaS nodes themselves (web and manager).

 

A simple command to check and validate this is:
curl -v https://<FQDN>

 

This should resolve most of the issues we hit with vRA adapter failures on vROps. And as always, never forget to check the logs.

 

Mainly the analytics and adapter logs.

 

Happy Learning.....

 

 

 

 

 



FROM THE EDITORS VIRTUAL DESK
Hi everyone, it has been a while since our last newsletter and of course there has been so much news in the past few weeks. One of the major topics that is progressing at a rapid rate is the VMware Cloud on AWS (VMC) solution that we have touched on in the past. I would encourage you to check out as many resources as you can and try to keep track of the ever changing landscape in this area which is moving forward at a rapid speed. One of my best resources is the VMware Cloud on AWS Roadmap which we publish and keep updated online. You can check it out here!

I wish you all a fantastic week ahead!

Virtually Yours
VMware TAM Team

Twitter | Facebook | Archive
-
NEXT TAM WEBINAR
March 2019 – VMware IT’s Journey with Horizon: Windows Virtual Desktop

Date: Thursday, March 21st
Time: 11:00am EST/ 10:00am CST/ 8:00am PST
Duration: 1.5 Hour

Synopsis:
VMware IT transformed the way they deliver and manage the virtual desktop environments and published application platforms. In this session we will share our journey of Windows virtual delivery and lifecycle management in an enterprise environment. We will discuss challenges and lessons learned during the migrations as well as the benefits that we achieved. We will discuss the persistent desktop experience in a non-persistent virtual desktop platform in Horizon 7.

Guest speaker:
Aju Sukumaran is an Information Systems Sr. Manager in VMware’s Colleague Experience & Technology Group. Currently he is working on deploying VMware's End User Computing products in VMware IT’s environments.

Registration Link:
https://vmware.zoom.us/webinar/register/WN_04nKXeG0SwyOUdnxBuHQ9A

NEWS AND DEVELOPMENTS FROM VMWARE

Open Source Blog

Network Virtualization Blog

vSphere Blog

Cloud management Blog

Cloud Native Blog

EUC Blog

vCloud Foundation Blog

EXTERNAL NEWS FROM 3RD PARTY BLOGGERS

Virtually Ghetto

ESX Virtualization

Cormac Hogan

Scott's Weblog

vSphere-land

NTPRO.NL

Virten.net

vinfrastructure

  • VMware vExpert 2019
    Reading Time: 3 minutes This year the vExpert 2019 announce has taken much time compared with the vExpert 2018 ann...
  • March 2019 IT events
    Reading Time: 1 minute Interesting European IT events: Gartner CIO Leadership Forum – London (Mar 4-6) Gartner Data  ...
  • Introducing VMware Essential PKS
    Reading Time: 3 minutes Kubernetes (k8s) is an open-source system for automating deployment, scaling, and management of...
  • Veeam Backup & Replication 10th birthday!
    Reading Time: 2 minutes Ten years ago, on Feb. 26, Veeam Backup & Replication 1.0 was introduced at VMworld Europe ...
  • Veeam ONE Community Edition
    Reading Time: 2 minutes With the new Veeam Availability Suite 9.5 Update 4, not only the Veeam Backup&Replication F...

Nukescloud

vSwitchZero

vNinja

VMExplorer


DISCLAIMER
While I do my best to publish unbiased information specifically related to VMware solutions there is always the possibility of blog posts that are unrelated, competitive or potentially conflicting that may creep into the newsletter. I apologize for this in advance if I offend anyone and do my best to ensure this does not happen. Please get in touch if you feel any inappropriate material has been published. All information in this newsletter is copyright of the original author. If you are an author and wish to no longer be used in this newsletter please get in touch.

© 2018 VMware Inc. All rights reserved.



FROM THE EDITORS VIRTUAL DESK
Hi everyone, as February unfolds one of the items I would like to point you all to is the VMware Learning Zone. Getting certified and attending quality training is important for all of us but it is really hard to get to either live on-line or in-person classes. This is where the VMware Learning Zone comes in. If you head over to https://www.vmware.com/education-services/learning-zone.html you will find options for VMware training and certification that I am sure will be very useful. You can purchase individual courses as well as a full subscription. The subscription option, I think, will appeal to most and comes in 3 flavors. There is the Free Basic subscription which provides access to over 700 technical videos, 58 short courses as well as training for the VMware Certified Associate. Next is the Standard Learning Subscription and finally the Premium Learning Subscription. There should be a subscription offer no matter what your requirements are. Have a chat with your VMware TAM or VMware representative for more information, and let's make this the year that we all get more certified.

Virtually Yours
VMware TAM Team

Twitter | Facebook | Archive
-
VMWARE TAM CUSTOMER WEBINAR
February 2019 – vSAN 2 Node and Stretched Clusters Deep Dive

Date: Thursday, February 21st
Time: 11:00am EST/ 10:00am CST/ 8:00am PST
Duration: 1.5 Hour

Synopsis:
vSAN 2 Node and Stretched Clusters Deep Dive – Best Practices, Troubleshooting, & Operations Considerations

Guest speaker:
Jase McCarty - Staff Technical Marketing Architect, Storage and Availability

Registration Link:
https://vmware.zoom.us/webinar/register/WN_yyQmR-oMQiaDq-Y-CIbp2w


NEWS AND DEVELOPMENTS FROM VMWARE

Open Source Blog

Network Virtualization Blog

vSphere Blog

Cloud management Blog

Cloud Native Blog

EUC Blog

vCloud Foundation Blog

EXTERNAL NEWS FROM 3RD PARTY BLOGGERS

Virtually Ghetto

ESX Virtualization

Cormac Hogan

Scott's Weblog

vSphere-land

NTPRO.NL

Virten.net

vinfrastructure

Nukescloud

vSwitchZero

vNinja

VMExplorer

 

 


Functional Testing Assists In App Upgradation.jpg

The vital role of application development and functional testing has been well established. Nevertheless, in the context of application modernization, that role grows, as it carries in its purview the challenges, risks, and scope of uplifting and upgrading the application. In a way, it also helps teams validate their efforts.

 

Keeping this scenario in mind, we are presenting a list of three ways in which functional testing helps in app modernization.

 

1. Delivering the Anticipated Result

Functional testing is required to address concerns about the actual implementation of functional requirements. It is normally referred to as black-box testing, which does not require much information about the implementation itself. With functional test sets, every scenario becomes a functional test. Therefore, when a function is implemented within the app, the corresponding functional test is run after the code has been unit tested. The significance of functional tests completely hinges on the objectives and priorities defined for the app. The objective is to deliver what is anticipated from the application.

 

2. Continuous Functioning and Anticipated Business Result from All the Functions

With system testing, teams implement end-to-end functional tests across software units. This helps guarantee that, as a whole, all the functions deliver the anticipated business result. The emphasis is on the complete scenario, which requires critical units to fit together and deliver a specific activity. Therefore, all subsystems have to be verified individually before they are combined with other subsystems. To avoid trouble pinpointing errors, the components are integrated gradually after being tested in isolation.

 

This is very pertinent in an application modernization scenario, where the latest features are being embedded, but these new components must integrate with the current ones to deliver a unified experience.

 

3. Application Modifications Do Not Influence the Complete System

Regression testing is essential to guarantee that code alterations do not introduce glitches or bugs that might affect the overall system. Therefore, it should include plans from the original unit tests as well as the functional and system tests. This helps confirm the existing functionality that is expected from the application. Regression testing might not be needed for the entire system, but it might be required for specific functional areas that are complex in nature.

 

Nonetheless, the challenge comes while modernizing legacy apps, where the development team has to deal with hardcoded business process workflows and other tightly bound legacy code.

A last option to try when nothing works from the UI or the admin UI.

 

If you have tried multiple options, couldn't resolve your vROps issues, and have decided to go with a new deployment, try one last option below before redeploying:

 

  • Run the Cluster Offline command on the MASTER node in the cluster.
    • $VMWARE_PYTHON_BIN /usr/lib/vmware-vcopssuite/utilities/sliceConfiguration/bin/vcopsClusterManager.py offline-cluster Maintenance
  • Run the below slice offline command on the analytics nodes in the cluster:
    • $VMWARE_PYTHON_BIN $ALIVE_BASE/../vmware-vcopssuite/utilities/sliceConfiguration/bin/vcopsConfigureRoles.py --action bringSliceOffline --offlineReason "recovery"
    • Run the above command on all the nodes: DATA, then REPLICA, and MASTER last, in that order.
  • Reboot all the Nodes
  • Power-on all the Nodes
  • Run the below command to bring the slice online
    • $VMWARE_PYTHON_BIN $VCOPS_BASE/../vmware-vcopssuite/utilities/sliceConfiguration/bin/vcopsConfigureRoles.py --action bringSliceOnline
    • Run the above command on all the nodes: MASTER, then REPLICA, then DATA, in that order.
  • Bring cluster online, run the below command on the MASTER Node.
    • $VMWARE_PYTHON_BIN /usr/lib/vmware-vcopssuite/utilities/sliceConfiguration/bin/vcopsClusterManager.py init-cluster

    

Test Cases.jpg

 

A test case can be defined as a set of variables and conditions used by software testers to check whether the system performs as expected. Test cases play a major role in validating the test coverage of a software app. A test case contains essential fields that offer information about the test, the expected results, and the activities involved in its execution. These fields include a unique name, any requirements involved, detailed steps, input conditions, and the desired outcomes for a specific app function.

Effective test cases are simple to maintain and execute. A test case management tool plays a significant role in streamlining the test case process; such tools make testing more effective by saving time and effort.

 

Keeping this scenario in mind, here is a list of 10 best practices for developing test cases.

 

1. Keep It Easy to Understand and Simple

An effective test case is extremely simple and well-written, so that testers can understand and execute it. Organize test cases by related areas or specific categories of the app. Test cases can be grouped on the basis of their user story, or by modules such as browser-specific behaviors. As a result, it becomes easy to maintain and review the test document. Information provided in the test cases must be clear to other developers, testers, and stakeholders.

 

2. Entail End User Perspective

Take the end-user perspective into consideration before drafting the test case. Put yourself in the shoes of an end user, the main stakeholder for whom the app is created. You must understand the end-user perspective, the requirements, and the functionality aspects to be covered. This will assist in identifying test scenarios that arise in real-world conditions.

 

3. Utilize Correct Naming Conventions

The test case must be named in a way that makes it easy for stakeholders to understand and identify its objective. You should name test cases in accordance with the functional area or the module that is being tested.

 

4. Offer Test Case Description

An appropriate test case description will allow users to comprehend what is being tested and how. Provide pertinent details such as the test environment and any other specific information. Mention the testing tools and the test data to be used when executing the tests.

 

5. Entail Preconditions and Assumptions

Testers must include all the preconditions and assumptions that apply to the test cases: details of the test environment and any special setup required for executing the test cases.

 

6. Mention the Steps That You Have Incorporated

Document the actual steps involved in executing the test cases, ensuring that testers do not miss any step. Make sure that all the test case verification steps are covered. Include relevant screenshots or documents that can assist in executing the steps provided in the test design.

 

7. Provide Details of the Test Data

You should provide the test data details for test case execution, particularly in scenarios where the same data is reused. This saves the time of recreating the test data for every cycle. Mention the value range for particular fields. Testers must not try to test each and every value; the objective must be to ensure maximum coverage by selecting some values from every class.

 

8. Make it Modular and Reusable

Testers must guarantee that there are no conflicts or dependencies among test cases. In scenarios where test cases are batched or inter-dependent, you are advised to mention this clearly in the test document.

 

9. Assign Testing Priority

Testers must assign a priority to every test case based on the component or feature involved. This guarantees that high-priority test cases are executed first.

 

10. Offer Post Conditions and Desired Results

Testers are advised to include the expected outcome for each step of the test case. You can also include relevant documents and screenshots for reference. Mention the post-conditions that must be verified after test case execution.

Placing RAID-5 / RAID-6 virtual disks (VMDKs) on vSAN has node-count requirements:

  • RAID-5 → 4 or more nodes (4 or more fault domains)
  • RAID-6 → 6 or more nodes (6 or more fault domains)

Reference: Using RAID 5 or RAID 6 Erasure Coding

 

According to the post below, this comes from vSAN's RAID implementation: RAID-5 objects are laid out as 4 components and RAID-6 objects as 6 components.

 

VSAN ERASURE CODING FAILURE HANDLING

https://cormachogan.com/2018/12/13/vsan-erasure-coding-failure-handling/
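Since the layouts are fixed (a RAID-5 object is 3 data + 1 parity components, a RAID-6 object is 4 data + 2 parity), the minimum fault-domain requirement reduces to a small lookup; this is a sketch of the rule, not an actual vSAN API:

```javascript
// Minimum fault domains (nodes, with default fault domains) required by
// each vSAN erasure-coding policy: RAID-5 = 3 data + 1 parity components,
// RAID-6 = 4 data + 2 parity components.
function minFaultDomains(raidLevel) {
    if (raidLevel === "RAID5") return 4; // 3 + 1
    if (raidLevel === "RAID6") return 6; // 4 + 2
    throw new Error("unsupported RAID level: " + raidLevel);
}

console.log(minFaultDomains("RAID5")); // 4
console.log(minFaultDomains("RAID6")); // 6
```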

 

So this time, I actually created VMs with vSAN RAID-5 / RAID-6 policies on a cluster with more than 6 nodes.

 

The vSAN environment is a 7-node configuration with all-flash disk groups, as follows:

  • vSAN 6.7 U1
  • 7 ESXi nodes
  • All-flash (cache-tier SSD + capacity-tier SSD) disk groups
  • One disk group per node (1 cache device + 2 capacity devices)
  • Fault domains left at the default configuration

vsan-raid56-00.png

 

Creating the VM storage policies.

First, create a RAID-5 storage policy. This time, only "Failures to tolerate" is changed from its default value.

vsan-raid56-01.png

 

The RAID-5 policy is named "vSAN-RAID5-Policy".

vsan-raid56-02.png

 

A RAID-6 policy named "vSAN-RAID6-Policy" was created in the same way.

vsan-raid56-04.png

 

Creating a VM (RAID-5).

I created a virtual machine with "vSAN-RAID5-Policy". For this test it has just one 40 GB thin-provisioned VMDK.

vsan-raid56-05.png

 

Checking the physical placement of the RAID-5 virtual disk: even with more than 4 nodes in the cluster, the components are distributed across only 4 components (4 nodes).

vsan-raid56-06.png

 

Creating a VM (RAID-6).

Next, I created a virtual machine with "vSAN-RAID6-Policy".

vsan-raid56-07.png

 

The RAID-6 virtual disk's physical placement is the same: even with more than 6 nodes, the components are distributed across 6 components (6 nodes).

vsan-raid56-08.png

 

So with vSAN RAID-5 / RAID-6, the number of RAID stripes does not simply grow with the node count. Presumably this is a design that balances availability and performance.

 

That's it for a look at vSAN RAID-5 / RAID-6 virtual disk components.
