
The DNS resolver configuration in VMware Photon OS 3.0 appears to have changed from earlier Photon OS releases.

In this post, we check the DNS server address on Photon OS 3.0 and then change the configuration.

 

I'm using the Photon OS 3.0 "OVA with virtual hardware v13 (UEFI Secure Boot)" image, which can be downloaded from the GitHub URL below.

Downloading Photon OS · vmware/photon Wiki · GitHub

root@photon-machine [ ~ ]# cat /etc/photon-release

VMware Photon OS 3.0

PHOTON_BUILD_NUMBER=26156e2

 

As shown below, /etc/resolv.conf on Photon 3.0 has the address 127.0.0.53 configured as its nameserver.

root@photon-machine [ ~ ]# cat /etc/resolv.conf

# This file is managed by man:systemd-resolved(8). Do not edit.

#

# This is a dynamic resolv.conf file for connecting local clients to the

# internal DNS stub resolver of systemd-resolved. This file lists all

# configured search domains.

#

# Run "resolvectl status" to see details about the uplink DNS servers

# currently in use.

#

# Third party programs must not access this file directly, but only through the

# symlink at /etc/resolv.conf. To manage man:resolv.conf(5) in a different way,

# replace this symlink by a static file or a different symlink.

#

# See man:systemd-resolved.service(8) for details about the supported modes of

# operation for /etc/resolv.conf.

 

nameserver 127.0.0.53

 

This address belongs to the local DNS stub resolver, which is listening on UDP port 53.

root@photon-machine [ ~ ]# ss -an | grep 127.0.0.53

udp   UNCONN  0        0                              127.0.0.53%lo:53                                               0.0.0.0:*

 

Checking which process owns port 53 shows systemd-resolve(d), which matches the comments in resolv.conf.

root@photon-machine [ ~ ]# tdnf install -y lsof

root@photon-machine [ ~ ]# lsof -i:53 -P -n

COMMAND   PID            USER   FD   TYPE DEVICE SIZE/OFF NODE NAME

systemd-r 205 systemd-resolve   12u  IPv4   3644      0t0  UDP 127.0.0.53:53

root@photon-machine [ ~ ]# ps -p 205

  PID TTY          TIME CMD

  205 ?        00:00:00 systemd-resolve

 

This is the name-resolution manager introduced in systemd 229 and later.

https://www.freedesktop.org/wiki/Software/systemd/resolved/

 

Incidentally, Photon 3.0 ships with systemd 239.

root@photon-machine [ ~ ]# rpm -q systemd

systemd-239-10.ph3.x86_64

 

The DNS server addresses are taken from the "DNS=" entries in the /etc/systemd/network/*.network files.

 

By default, Photon OS 3.0 ships with a DHCP configuration file.

root@photon-machine [ ~ ]# cat /etc/systemd/network/99-dhcp-en.network

[Match]

Name=e*

 

[Network]

DHCP=yes

IPv6AcceptRA=no

 

At this point, DHCP has configured the two DNS servers in my home lab.

root@photon-machine [ ~ ]# resolvectl dns

Global:

Link 2 (eth0): 192.168.1.101 192.168.1.102

 

resolvectl can show more detailed information.

(By default the output goes through a pager, so here it is piped to cat to display everything at once.)

root@photon-machine [ ~ ]# resolvectl | cat

Global

       LLMNR setting: no

MulticastDNS setting: yes

  DNSOverTLS setting: no

      DNSSEC setting: no

    DNSSEC supported: no

Fallback DNS Servers: 8.8.8.8

                      8.8.4.4

                      2001:4860:4860::8888

                      2001:4860:4860::8844

          DNSSEC NTA: 10.in-addr.arpa

                      16.172.in-addr.arpa

                      168.192.in-addr.arpa

                      17.172.in-addr.arpa

                      18.172.in-addr.arpa

                      19.172.in-addr.arpa

                      20.172.in-addr.arpa

                      21.172.in-addr.arpa

                      22.172.in-addr.arpa

                      23.172.in-addr.arpa

                      24.172.in-addr.arpa

                      25.172.in-addr.arpa

                      26.172.in-addr.arpa

                      27.172.in-addr.arpa

                      28.172.in-addr.arpa

                      29.172.in-addr.arpa

                      30.172.in-addr.arpa

                      31.172.in-addr.arpa

                      corp

                      d.f.ip6.arpa

                      home

                      internal

                      intranet

                      lan

                      local

                      private

                      test

 

Link 2 (eth0)

      Current Scopes: DNS

       LLMNR setting: yes

MulticastDNS setting: no

  DNSOverTLS setting: no

      DNSSEC setting: no

    DNSSEC supported: no

  Current DNS Server: 192.168.1.101

         DNS Servers: 192.168.1.101

                      192.168.1.102

 

Now let's try changing the DNS server addresses.

Edit the configuration file with the vi editor.

root@photon-machine [ ~ ]# vi /etc/systemd/network/99-dhcp-en.network

 

This time, append the Domains= and DNS= lines shown below.

[Match]

Name=e*

 

[Network]

DHCP=yes

IPv6AcceptRA=no

Domains=go-lab.jp

DNS=192.168.1.1

DNS=192.168.1.2

 

Restart the network service.

root@photon-machine [ ~ ]# systemctl restart systemd-networkd

 

The DNS server addresses have been registered.

The servers appended in the file were added at a higher priority than the DNS server addresses supplied by the DHCP server.

root@photon-machine [ ~ ]# resolvectl dns

Global:

Link 2 (eth0): 192.168.1.1 192.168.1.2 192.168.1.101 192.168.1.102
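For reference, the same file could be made fully static instead of layering DNS= on top of DHCP. This variant is not from the original post; the file name and all addresses below are illustrative:

```ini
# /etc/systemd/network/99-static-en.network  (hypothetical file name)
[Match]
Name=e*

[Network]
# Static addressing instead of DHCP=yes
Address=192.168.1.50/24
Gateway=192.168.1.254
Domains=go-lab.jp
DNS=192.168.1.1
DNS=192.168.1.2
```

With a static file like this, resolvectl would list only the DNS= servers, since nothing is learned from DHCP.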

 

Showing only the last 10 lines of the resolvectl output, the domain from "Domains" has also been added.

root@photon-machine [ ~ ]# resolvectl | tail -n 10

       LLMNR setting: yes

MulticastDNS setting: no

  DNSOverTLS setting: no

      DNSSEC setting: no

    DNSSEC supported: no

         DNS Servers: 192.168.1.1

                      192.168.1.2

                      192.168.1.101

                      192.168.1.102

          DNS Domain: go-lab.jp

 

Once name resolution actually occurs, the DNS server in use (Current DNS Server) becomes visible.

root@photon-machine [ ~ ]# resolvectl | tail -n 10

MulticastDNS setting: no

  DNSOverTLS setting: no

      DNSSEC setting: no

    DNSSEC supported: no

  Current DNS Server: 192.168.1.1

         DNS Servers: 192.168.1.1

                      192.168.1.2

                      192.168.1.101

                      192.168.1.102

          DNS Domain: go-lab.jp

 

Even though the DNS server addresses have changed, the address in /etc/resolv.conf remains 127.0.0.53; the search domain, however, is added.

root@photon-machine [ ~ ]# grep -v '#' /etc/resolv.conf

 

nameserver 127.0.0.53

search go-lab.jp

 

That's all. This was a look at DNS server address configuration on Photon OS 3.0.

There are times during troubleshooting when you would like to see a particular attribute in Workspace ONE Identity (VMware Identity Manager) that is not displayed in the web portal, or when you would like to update a particular attribute or delete a JIT user.

 

DISCLAIMER:  Please use the API with caution, as it can cause issues if not used appropriately. Please do NOT use it in production. Use at your own risk.

 

In this blog we'll walk through a few useful API calls to help in your troubleshooting. For a complete list of API calls and documentation:

 

VMware Identity Manager API - VMware API Explorer - VMware {code}

 

Please download and install the latest version of Postman.

 

In this blog we'll use the following APIs:

  • Get Specific User Details
  • Update SCIM User
  • Delete SCIM User
  • Create SCIM User

 

Step 1: Getting your OAuth Token

 

In order to use the SCIM-based API you need an OAuth token. I'm going to walk through two different ways of getting a token to use in your environment.

 

If you are going to access a particular environment quite often using Postman, I would suggest Option 1. If it's unlikely you will access a particular environment that often, go with Option 2.

 

Option 1: Creating an OAuth Application

  1. Log into Workspace ONE Identity Admin Console
  2. Click on the Catalog (down arrow) and select Settings
  3. Click "Remote App Access"
  4. Click Create Client
  5. Select "Service Access Token" from the Drop down menu
  6. Provide a Client ID ie. Postman
  7. Expand Advanced
  8. Click Generate Shared Secret (or provide one)
  9. Click Add
  10. We will configure Postman in the next section.

 

Option 2: Using your browser cookies

 

  1. Make sure you have a way of accessing your browser cookies. I use a Chrome plugin called "Edit this cookie"
  2. Log into your Workspace ONE Identity Admin Console
  3. Click the Cookie Icon in the chrome address bar
  4. Search for the "HZN" cookie
  5. Copy the value for HZN.
  6. We will configure Postman in the next section.

 

Step 2: Configure Postman to use your OAuth Token

Depending which option you chose in the previous step, follow the instructions below to add your OAuth Token

 

Option 1: Creating an OAuth Application

  1. Open a new Tab in Postman
  2. In the authorization section, select "OAuth 2.0" as the type:
  3. Click Get New Access Token
  4. Provide a Token Name (ie. Workspace ONE)
  5. Under "Auth URL", enter https://[Tenant URL]/SAAS/auth/oauth2/authorize
    ie. https://dsas.vmwareidentity.com/SAAS/auth/oauth2/authorize
  6. Under "Access Token URL", enter https://[Tenant URL]/SAAS/auth/oauthtoken
    ie. https://dsas.vmwareidentity.com/SAAS/auth/oauthtoken
  7. Under Client ID, enter your Client ID from step 1.
  8. Under Secret, enter your secret from step 1.
  9. Under Scope, leave blank.
  10. Under Grant Type, select "Client Credentials"
  11. Click Request Token
  12. Click on WorkspaceONE under Existing Tokens
  13. Select Use Token
  14. If you click on the headers tab you will see the "Authorization" header has been added with the correct token.

 

Option 2: Using your browser cookies

 

  1. Open a new Tab in Postman
  2. Click on the Headers Section
  3. Add the Header Key "Authorization"
  4. In the Value, type "Bearer" followed by a space, then paste the value of the HZN cookie.

 

Getting User Details

Now that you have your OAuth token, we can use this token to query Workspace ONE Identity.

 

  1. For the HTTP Method, select "GET"
  2. Enter the following for the URL: https://[TENANTURL]/SAAS/jersey/manager/api/scim/Users?filter=username%20eq%20%22MyUserID%22
  3. Replace MyUserID with a username in your environment
    ie. https://dsas.vmwareidentity.com/SAAS/jersey/manager/api/scim/Users?filter=username%20eq%20%22sdsa%22
  4. This will return a complete result set of attributes for the particular user.
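The same lookup can also be run outside Postman with curl. This is a sketch: the tenant URL and username are the example values from above, and the token is a placeholder; the only extra work is URL-encoding the spaces and double quotes in the SCIM filter.

```shell
# Build the SCIM filter and URL-encode the spaces and double quotes
FILTER='userName eq "sdsa"'            # placeholder username
ENCODED=$(printf '%s' "$FILTER" | sed -e 's/ /%20/g' -e 's/"/%22/g')
URL="https://dsas.vmwareidentity.com/SAAS/jersey/manager/api/scim/Users?filter=${ENCODED}"
echo "$URL"

# With a valid OAuth token in $TOKEN, the lookup itself would be:
# curl -s -H "Authorization: Bearer $TOKEN" "$URL"
```

The echo prints the fully encoded URL, which matches the one shown in the step above.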

Updating User Details

In order to update user details via the API, you will need to collect some information from the Get User Details.

 

In my example, I'm going to update the "userPrincipalName" in Workspace ONE Access for one of my users.

  1. Perform a "Get" on the particular user and retrieve the schema information. Please note, this will be different for each tenant as the tenant name is part of the schema.
  2. Copy this section to notepad.
  3. Retrieve the section which contains the attribute(s) you want to update
  4. Copy the ID of the User
  5. Open a new Tab in Postman
  6. Add the Authorization Header as per the previous section.
  7. For the HTTP Method, select "PATCH"
  8. For the URL, enter: https://[TENANTURL]/SAAS/jersey/manager/api/scim/Users/[ID]
    Replace the Tenant URL with your URL
    Replace the ID with the ID from the step 4 in this section.
    ie. https://dsas.vmwareidentity.com/SAAS/jersey/manager/api/scim/Users/884b7e7d-6a7b-4985-b113-56235826e8a6
  9. Select Body
  10. Enter, as raw text, the JSON that we'll send to Workspace ONE
  11. Select "JSON (application/json)" as the Content-Type
  12. Click Send
  13. You should receive a "204 No Content" response
  14. If you perform a GET User again you should see the value has changed.
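The steps above can be sketched with curl as well. The body below is hypothetical: the schema URNs and the new userPrincipalName value are placeholders that you must take from your own GET response (remember that one URN is tenant-specific), so treat this as a shape, not an exact payload.

```shell
# Hypothetical PATCH body: the schema URNs and new userPrincipalName are
# placeholders taken from your own GET response.
BODY='{
  "schemas": ["urn:scim:schemas:core:1.0",
              "urn:scim:schemas:extension:workspace:1.0"],
  "urn:scim:schemas:extension:workspace:1.0": {
    "userPrincipalName": "newupn@mydomain.com"
  }
}'
echo "$BODY"

# With the user ID and token filled in, the update would be sent as:
# curl -s -X PATCH \
#   -H "Authorization: Bearer $TOKEN" \
#   -H "Content-Type: application/json" \
#   -d "$BODY" \
#   "https://dsas.vmwareidentity.com/SAAS/jersey/manager/api/scim/Users/$USER_ID"
```

A successful PATCH returns the same "204 No Content" response as in Postman.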

 

Delete Users

If you are using JIT to onboard users into Workspace ONE Identity, you've probably noticed there is no way to delete users in the web portal. The only way to delete them is with the API.

  1. Perform a "Get" on the particular user and retrieve the ID
  2. Open a new Tab in Postman
  3. Add the Authorization Header as per the previous section.
  4. For the HTTP Method, select "DELETE"
  5. For the URL, enter: https://[TENANTURL]/SAAS/jersey/manager/api/scim/Users/[ID]
    Replace the Tenant URL with your URL
    Replace the ID with the ID from step 1 in this section.
    ie. https://dsas.vmwareidentity.com/SAAS/jersey/manager/api/scim/Users/f6f89782-0a2a-4cc8-84a8-057f1da6ecde
  6. Click Send
  7. You should receive a "204 No Content" response
  8. If you perform a GET User again you should see no results found.
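For reference, the equivalent delete with curl looks like the following; the tenant URL and user ID are the example values from the steps above, and the token is a placeholder.

```shell
# Placeholders: fill in your tenant URL, OAuth token, and the user ID
# returned by the GET call.
TENANT='https://dsas.vmwareidentity.com'
USER_ID='f6f89782-0a2a-4cc8-84a8-057f1da6ecde'
URL="$TENANT/SAAS/jersey/manager/api/scim/Users/$USER_ID"
echo "$URL"

# curl -s -X DELETE -H "Authorization: Bearer $TOKEN" "$URL"
# A successful delete returns HTTP 204 No Content.
```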

 

Create Users

Creating users in Workspace ONE Identity requires a lot more steps. I reluctantly decided to document them, as this should really be done by the out-of-the-box connectors. The process is slightly different between the System Directory, a Local Directory, and Other.  The "Other" directory is created automatically when setting up the UEM/WS1 integration.

 

Creating Users in the System Directory

  1. Open a new Tab in Postman
  2. Add the Authorization Header as per the previous section.
  3. For the HTTP Method, select "POST"
  4. For the URL, enter: https://[TENANTURL]/SAAS/jersey/manager/api/scim/Users
    Replace the Tenant URL with your URL
    ie. https://dsas.vmwareidentity.com/SAAS/jersey/manager/api/scim/Users
  5. Set the Content-Type to "application/json"
  6. Use the following as a sample:
{
  "schemas": [
    "urn:scim:schemas:core:1.0",
    "urn:scim:schemas:extension:workspace:tenant:sva:1.0",
    "urn:scim:schemas:extension:workspace:1.0",
    "urn:scim:schemas:extension:enterprise:1.0"
  ],
  "userName": "testing4@mydomain.com",
  "name": {
    "givenName": "first4",
    "familyName": "last4"
  },
  "emails": [
    {
      "value": "testing4@mydomain.com"
    }
  ],
  "password": "Password$!"
}

 

Creating Users in a Local Directory

 

  1. Open a new Tab in Postman
  2. Add the Authorization Header as per the previous section.
  3. For the HTTP Method, select "POST"
  4. For the URL, enter: https://[TENANTURL]/SAAS/jersey/manager/api/scim/Users
    Replace the Tenant URL with your URL
    ie. https://dsas.vmwareidentity.com/SAAS/jersey/manager/api/scim/Users
  5. Set the Content-Type to "application/json"
  6. Use the following as a sample:
{
  "schemas": [
    "urn:scim:schemas:core:1.0",
    "urn:scim:schemas:extension:workspace:tenant:sva:1.0",
    "urn:scim:schemas:extension:workspace:1.0",
    "urn:scim:schemas:extension:enterprise:1.0"
  ],
  "userName": "testing5@mydomain.com",
  "name": {
    "givenName": "first5",
    "familyName": "last5"
  },
  "emails": [
    {
      "value": "testing5@mydomain.com"
    }
  ],
  "password": "Password$!",
   "urn:scim:schemas:extension:workspace:1.0": {
        "internalUserType": "LOCAL",
        "userStatus": "1",
        "domain": "mydomain.com"
      }


}

 

Creating Users in an Other Directory

 

The steps to create a user in an Other directory are almost identical to the Local Directory, except that we need to know the "userStoreUuid" of the directory and we need an externalId. The externalId should be a valid ObjectGUID. If you don't have a valid ObjectGUID you will have problems when enrolling devices from the Workspace ONE Intelligent Hub application. Ensure that you generate a proper GUID; see an online UUID generator tool for an example of a proper GUID.

 

  1. Open a new Tab in Postman
  2. Add the Authorization Header as per the previous section.
  3. For the HTTP Method, select "GET"
  4. For the URL, enter: https://[TENANT URL]/SAAS/jersey/manager/api/connectormanagement/directoryconfigs?includeJitDirectories=true
    Replace the Tenant URL with your URL
    ie. https://dsas.vmwareidentity.com/SAAS/jersey/manager/api/connectormanagement/directoryconfigs?includeJitDirectories=true
  5. Click Send
  6. In the response, search for your "Other Directory" and copy the userStoreUuid
  7. Open a new Tab in Postman
  8. Add the Authorization Header as per the previous section.
  9. For the HTTP Method, select "POST"
  10. For the URL, enter: https://[TENANTURL]/SAAS/jersey/manager/api/scim/Users
    Replace the Tenant URL with your URL
    ie. https://dsas.vmwareidentity.com/SAAS/jersey/manager/api/scim/Users
  11. Set the Content-Type to "application/json"
  12. Use the following as a sample, and don't forget to create a unique externalId:
{
  "schemas": [
    "urn:scim:schemas:core:1.0",
    "urn:scim:schemas:extension:workspace:tenant:sva:1.0",
    "urn:scim:schemas:extension:workspace:1.0",
    "urn:scim:schemas:extension:enterprise:1.0"
  ],
  "externalId": "c58085e6-c97a-4df3-8e4a-e376913fab17",
  "userName": "testing6@mydomain.com",
  "name": {
    "givenName": "test6",
    "familyName": "last6"
  },
  "emails": [
    {
      "value": "testing6@mydomain.com"
    }
  ],
  "password": "Password$!",
   "urn:scim:schemas:extension:workspace:1.0": {
        "internalUserType": "PROVISIONED",
        "userStatus": "1",
        "domain": "1dsavm.com",
        "userStoreUuid": "987dca12-22e3-4ec6-8958-110cca481c3d",
        "externalUserDisabled": false,
        "userPrincipalName": "testing6@mydomain.com"
      }
}

 

Creating an Other Directory

When you configure UEM to integrate with Identity Manager, an "Other" directory should be created automatically. If it is not created, you can create one via the API as well.

  1. Open a new Tab in Postman
  2. Add the Authorization Header as per the previous section.
  3. For the HTTP Method, select "POST"
  4. For the URL, enter: https://[TENANTURL]/SAAS/jersey/manager/api/connectormanagement/directoryconfigs
    Replace the Tenant URL with your URL
    ie. https://dsas.vmwareidentity.com/SAAS/jersey/manager/api/connectormanagement/directoryconfigs
  5. Set the Content-Type to "application/vnd.vmware.horizon.manager.connector.management.directory.other+json"
  6. Use the following as a sample
{
"type":"OTHER_DIRECTORY",
"domains":["SteveTestDomain2"],
"name":"SteveTest2"
}

 

Troubleshooting

It would be impossible to discuss every combination of errors that can be returned using the API. Here are a few common ones:

 

  1. If you receive the error "User is not authorized to perform the task."
    This error typically means that your OAuth token has expired. Regenerate your OAuth token.  If you used the browser-cookies method to get your token, ensure that your HZN cookie is from the administrative interface.
  2. When updating a user, you receive the error "???UNSUPPORTED_MEDIA_TYPE???"
    This error means that you are sending a blank or incorrect Content-Type. Make sure the Content-Type is set to "application/json".

The release of Workspace ONE 19.03 brought a very seamless integration of Okta applications.

 

If you have integrated the two solutions previously, you will recall the number of steps required to create and entitle new applications in Workspace ONE from Okta. This integration allows you to create and entitle applications in Okta and have them seamlessly appear in Workspace ONE along with your native and virtual applications.

 

Let's walk through the steps to integrate the two solutions.

 

In this blog we assume that you have existing connectors for Workspace ONE UEM and Workspace ONE Identity, and that your Workspace ONE Identity access policies are already configured for Mobile SSO, Certificate, or Password (Cloud Deployment).

 

Part 2: Unified Digital Workspace

The objective of this section is to automatically sync all SAML-enabled applications from Okta to Workspace ONE. This configuration eliminates the manual steps otherwise required to create and entitle Okta applications in Workspace ONE.

 

Step 1: Create an Okta API Key

  1. Log into the Okta Admin Console
  2. Go to Security -> API
  3. Click on Tokens
  4. Click Create Token
  5. Provide a name for the token
  6. Click Create Token
  7. Click the Copy Token button
    Note:  It's very important that you copy and save this token somewhere. Once you close this window you will not be able to retrieve this value again; you will have to delete the token and create a new one.

 

Step 2: Configure Workspace ONE with Okta API Information

  1. Log into the Workspace ONE Admin Console
  2. Click on Identity & Access Management -> Setup -> Okta
    Note: If you are using Chrome, please be aware that it may auto-fill some of these fields.

  3. Enter your Okta Cloud URL.
    Note: Do NOT use the Admin URL!!

  4. Paste your Okta API Token
  5. Select the username search parameter that will match in Okta.
  6. Click Save

 

NOTE: Okta Applications will NOT appear in the Workspace ONE Admin Console

 

Step 3: Testing

  1. Log into Workspace ONE with a directory account.
  2. You should now see all your Okta Applications along with any other applications configured in Workspace ONE.


The release of Workspace ONE 19.03 brought a very seamless integration of Okta applications.

 

If you have integrated the two solutions previously, you will recall the number of steps required to create and entitle new applications in Workspace ONE from Okta. This integration allows you to create and entitle applications in Okta and have them seamlessly appear in Workspace ONE along with your native and virtual applications.

 

Let's walk through the steps to integrate the two solutions.

 

In this blog we assume that you have existing connectors for Workspace ONE UEM and Workspace ONE Identity, and that your Workspace ONE Identity access policies are already configured for Mobile SSO, Certificate, or Password (Cloud Deployment).

 

 

Part 1: Core Setup and Configuration

The objective of this section is to configure Okta to delegate authentication to Workspace ONE Identity, where Mobile SSO and Device Compliance are configured.

 

Step 1:  Exporting the Workspace ONE IdP Metadata

  1. Log into Workspace ONE Identity Console -> Catalog -> Settings
  2. Click on "Identity Provider (IdP) metadata" and save the file locally.
  3. Scroll down to the Signing Certificate Section and Download.

Step 2: Add Identity Provider to Okta

  1. Log into your Okta Admin Console
  2. Click on Security -> Identity Providers -> SAML 2.0 Identity Provider
  3. Click on Add Identity Provider
  4. Provide a name: ie. Workspace ONE
  5. For IdP Username, select "idpuser.subjectNameId"
  6. For "If no match is found", select "Redirect to Okta sign-in page"
  7. For your "IdP Issuer URI", retrieve and paste this value from the SAML metadata you downloaded in step one.
  8. For your "IdP Single Sign-On URL", retrieve and paste this value from the SAML metadata you downloaded in step one.
  9. For the "IdP Signature Certificate", upload the signing certificate you downloaded in Step 1.
  10. Expand the newly created Identity Provider and download the metadata

Step 3: Create Okta Application Source in Workspace ONE Identity

  1. In Workspace ONE Identity Console, click on Catalog -> Settings
  2. Click on Application Sources
  3. Click on Okta
  4. On the Okta Application Source page, click next
  5. Select "URL/XML" and paste the contents of the Okta metadata we downloaded in the previous step.
    (If you choose the manual option instead, enter the equivalent attribute mappings by hand.)

  6. On the Access Policies page, click next (see note below):

    Note: For the purpose of this blog we are using the "default_access_policy_set". However, it is recommended that you create an access policy specific to the Okta Application Source.  The reason for this recommendation is that you'll likely not want any fallback mechanisms when performing Mobile SSO (so we can present an error message to enroll your device). However, when you enroll your device into Workspace ONE UEM you probably do want a fallback mechanism.

  7. Click Save on the summary page.

 

Step 4: Create Okta Routing Rules

  1. Log into the Okta console.
  2. Go to Security -> Identity Providers
  3. Click on Routing Rules
  4. Click Add Routing Rule
  5. Provide a Rule Name
  6. Select the platforms that you want to use Workspace ONE Identity (ie. iOS/Android)
  7. Select the applications that you want to use Workspace ONE Identity
  8. Select the Identity Provider we created previously
  9. Click Create Rule

 

Step 5: Testing

  1. Access your Salesforce development tenant
  2. Select to Authenticate with Okta
  3. Based on your Okta Rules, you should be redirected to Workspace ONE Identity.
  4. Authenticate within WS1
  5. You should be returned to Okta, then redirected and successfully authenticated into Salesforce

Troubleshooting Tips

 

  1. Ensure your user is entitled to Salesforce within Okta.
  2. Verify the IdP Issuer in Okta is correct:

    https://aw-sdsatest.vidmpreview.com/SAAS/API/1.0/GET/metadata/idp.xml
  3. Verify that the username value Workspace ONE sends to Okta matches the IdP username configured in Okta.

 

Many promising start-ups fail abruptly due to poor-quality applications. Software development companies face growing challenges in meeting tough deadlines while also maintaining product quality. In the past, companies took months to deliver applications, but with current technological advances, release cycles are shorter than ever. IT companies invest time and money into setting up quality assurance teams. Whether it is a start-up or an established organization, hiring an independent software testing company can be the right choice.

 

Setting up a QA department is not a viable option for most companies. Let's look at the top five reasons why organizations should invest in an independent service provider:

 

1. The Testing Skill Set

Let's talk about first things first. As an entrepreneur or business executive, the first question that comes to mind is: when should you use such services? Quality assurance is not easy, and not all IT companies have the skills and tools to perform these tests efficiently; they lack resources, time, and expertise. Thus, when new in business, a start-up shouldn't hesitate to hire software testing services.

 

2. The Effects of Business Processes on Quality

Due to product release deadlines, developers often cannot focus on other projects. Too often, the business's own processes are time-consuming, which adversely affects software quality. But if companies hire an independent QA testing company, its services can fill this gap between quality and timely delivery of the project.

 

3. The Lack of Expertise

Market leaders in the IT industry leverage their own software testing mechanisms to achieve fast delivery of products. However, this is not possible for small and medium-sized companies. QA testers working for such businesses do not have hands-on access to extensive testing tools and techniques, which limits their expertise. So, looking to an external source for software testing is the only choice left to them. Moreover, expert software testing companies use a broad range of tools to enhance software quality.

 

4. The Cut Down on Costs

An IT business should also choose to outsource testing to an independent company if the cost of testing is too high in its region. Outsourcing is a cost-effective solution that doesn't increase the overall cost of the product.

 

5. The Guarantee from Quality Assurance

Hiring an independent company means that testing services come with a guarantee. By contrast, even if a business sets up its own QA team, there is no guarantee that such an initiative will work. So businesses hire the services of an independent testing company for better results.

 

With the growing digital world and the number of devices in it, testing has become a vital part of the software development process. Businesses today must invest extensively in testing efforts to earn profits and maintain their reputation in the industry.

A few days ago I was doing some testing with Hyper-V. Since I can easily create a Windows Server using vSphere (6.5 in my case), I decided to create my Hyper-V host in a VM.

In order to get this to work, I created a VM running Windows Server 2019 and then needed to apply some customization to it.

 

VM CPU Settings

 

When creating the VM, change the CPU/MMU Virtualization settings to Hardware CPU and MMU.

 


 

After the VM is created, make sure it is powered off, then navigate to its folder under the Storage menu in vSphere.

Download the <VM Name>.vmx file to your PC and open it with a text editor.

 

At the end of the file, add these lines:

 

hypervisor.cpuid.v0 = "FALSE"

vhv.enable = "TRUE"

 


 

Save the VMX file and upload it back to the VM's folder. In my case, instead of overwriting the file in the datastore, I renamed the original to <VM Name>.vmx.old.
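If you prefer the command line, the same two settings can be appended to the downloaded copy of the file before uploading it back. This is a sketch; the file name below is a placeholder for your own <VM Name>.vmx.

```shell
# Append the nested-virtualization settings to a local copy of the VMX.
# MyHyperVHost.vmx is a placeholder for your own <VM Name>.vmx.
VMX=./MyHyperVHost.vmx
printf 'hypervisor.cpuid.v0 = "FALSE"\nvhv.enable = "TRUE"\n' >> "$VMX"

# Show the lines we just added, to confirm before uploading:
grep -E 'hypervisor\.cpuid\.v0|vhv\.enable' "$VMX"
```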

 


 

 

Installing Hyper-V

 

Power on the VM, install Windows and VMware Tools if you haven't done so, and then go to Server Manager.

Click Add roles and features and follow the wizard to install Hyper-V.

 


 

Restart the Server as requested, then launch Hyper-V Manager.

 


 

As a quick test, create a new VM with the default values (click New > Virtual Machine and follow the wizard). From my testing, you will get an error message when trying to power on this VM if something is wrong with the Hyper-V host VM's configuration.

 

 

Networking

 

After creating my test VM, I noticed that it could not ping anything on the network apart from itself and the Hyper-V host. I checked the Virtual Switch Manager in Hyper-V and also the networking settings on the VM, and all seemed to be configured as expected. The way I got it to work was by going to the ESXi console and enabling Promiscuous mode and Forged transmits on the vSwitch that was connected to the Hyper-V server VM.
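The same security-policy change can, as far as I know, also be made from the ESXi shell instead of the host client UI. The command below is shown as a string so it can be reviewed first; vSwitch0 is an assumption, so substitute the switch actually backing the Hyper-V VM.

```shell
# Assumed vSwitch name; list the switches with: esxcli network vswitch standard list
VSWITCH=vSwitch0
CMD="esxcli network vswitch standard policy security set -v ${VSWITCH} --allow-promiscuous=true --allow-forged-transmits=true"
echo "$CMD"
```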

 


 


 

 

 

 

--

 

This configuration was done on my testing environment and I cannot guarantee this is fit for production environments.

 

The postings on this site are my own and do not represent VMware’s positions, strategies or opinions.

Let's build a simple DNS server on VMware Photon OS 3.0.

This time, I'll use dnsmasq, which is available in the Photon OS RPM repository.

 

Preparing Photon OS.

Deploy the Photon OS OVA provided by VMware.

This time I used the "OVA with virtual hardware v13 (UEFI Secure Boot)" image.

Downloading Photon OS · vmware/photon Wiki · GitHub

 

Deploy the OVA file and power it on.

Then log in as root with the initial password "changeme", changing the password at first login.

root@photon-machine [ ~ ]# cat /etc/photon-release

VMware Photon OS 3.0

PHOTON_BUILD_NUMBER=26156e2

 

Change the hostname to something recognizable (here, lab-dns-01).

After logging in again, the new hostname is reflected in the bash prompt.

root@photon-machine [ ~ ]# hostnamectl set-hostname lab-dns-01

 

The network is configured via DHCP.

 

Building the DNS server (dnsmasq).

First, install dnsmasq.

root@lab-dns-01 [ ~ ]# tdnf install -y dnsmasq

root@lab-dns-01 [ ~ ]# rpm -q dnsmasq

dnsmasq-2.79-2.ph3.x86_64

 

dnsmasq can serve hosts-file entries as DNS records.

Add the DNS record information to the /etc/hosts file.

root@lab-dns-01 [ ~ ]# echo '192.168.1.20 base-esxi-01.go-lab.jp' >> /etc/hosts

root@lab-dns-01 [ ~ ]# echo '192.168.1.30 lab-vcsa-01.go-lab.jp' >> /etc/hosts

root@lab-dns-01 [ ~ ]# tail -n 2 /etc/hosts

192.168.1.20 base-esxi-01.go-lab.jp

192.168.1.30 lab-vcsa-01.go-lab.jp
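The mapping dnsmasq builds from these hosts-file lines can be sketched as follows. This is a minimal illustration in Python of the name-to-address table, not how dnsmasq is actually implemented:

```python
# Minimal sketch of how hosts-file lines become forward (A) lookups.
# The two entries mirror the ones added to /etc/hosts above.
lines = [
    "192.168.1.20 base-esxi-01.go-lab.jp",
    "192.168.1.30 lab-vcsa-01.go-lab.jp",
]
records = {}
for line in lines:
    address, *names = line.split()  # first field is the address, rest are names
    for name in names:
        records[name] = address
print(records["lab-vcsa-01.go-lab.jp"])  # → 192.168.1.30
```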

 

Start dnsmasq.

root@lab-dns-01 [ ~ ]# systemctl start dnsmasq

root@lab-dns-01 [ ~ ]# systemctl is-active dnsmasq

active

root@lab-dns-01 [ ~ ]# systemctl enable dnsmasq

Created symlink /etc/systemd/system/multi-user.target.wants/dnsmasq.service → /lib/systemd/system/dnsmasq.service.

 

Whenever you edit the hosts file, restart the dnsmasq service.

root@lab-dns-01 [ ~ ]# systemctl restart dnsmasq

 

Open the DNS port in iptables.

Instead of using the iptables-save command, you can also persist the iptables setting by appending

"-A INPUT -p udp -m udp --dport 53 -j ACCEPT" directly to the /etc/systemd/scripts/ip4save file.

root@lab-dns-01 [ ~ ]# iptables -A INPUT -p udp -m udp --dport 53 -j ACCEPT

root@lab-dns-01 [ ~ ]# iptables-save > /etc/systemd/scripts/ip4save

root@lab-dns-01 [ ~ ]# systemctl restart iptables
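What the direct-append alternative amounts to can be sketched like this. In an iptables-save-format file the rule has to end up before the final COMMIT line; the ip4save content below is a simplified, hypothetical example:

```python
# Simplified, hypothetical ip4save content; a real file has more rules.
ip4save = """*filter
:INPUT DROP [0:0]
-A INPUT -i lo -j ACCEPT
COMMIT"""
rule = "-A INPUT -p udp -m udp --dport 53 -j ACCEPT"
lines = ip4save.splitlines()
lines.insert(lines.index("COMMIT"), rule)  # keep the rule above COMMIT
print("\n".join(lines))
```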

 

Verifying name resolution.

Install bindutils on another Photon OS 3.0 machine.

root@photon-machine [ ~ ]# tdnf install -y bindutils

 

bindutils includes the nslookup and dig commands.

root@photon-machine [ ~ ]# rpm -ql bindutils

/etc/named.conf

/usr/bin/dig

/usr/bin/host

/usr/bin/nslookup

/usr/lib/tmpfiles.d/named.conf

/usr/share/man/man1/dig.1.gz

/usr/share/man/man1/host.1.gz

/usr/share/man/man1/nslookup.1.gz

 

Confirm that the registered records can be resolved.

Here, I check forward and reverse lookups for "lab-vcsa-01.go-lab.jp" and "192.168.1.30".

"192.168.1.15" is the address of the DNS server where dnsmasq is installed.

root@photon-machine [ ~ ]# nslookup lab-vcsa-01.go-lab.jp 192.168.1.15

Server:         192.168.1.15

Address:        192.168.1.15#53

 

Name:   lab-vcsa-01.go-lab.jp

Address: 192.168.1.30

 

root@photon-machine [ ~ ]# nslookup 192.168.1.30 192.168.1.15

30.1.168.192.in-addr.arpa       name = lab-vcsa-01.go-lab.jp.

 

 

This gives us a DNS server for use in labs and similar environments.

 

That's it for turning Photon OS 3.0 into a DNS server.

I recently had a chance to think about the vSAN cache disk capacity in my home lab,

so I took a look at vSAN in vROps (vRealize Operations Manager).

Here, vROps 7.5 is displaying vSAN 6.7 U1.

 

The vSAN adapter has already been configured in vROps.

vrops-vsan-cache-size-00.png

 

vROps can also monitor vSAN, including the frequently discussed

free capacity (%) of the disk group write buffer.

vrops-vsan-cache-size-01.png

 

Now, let's check the recommended cache disk size.

Open "Environment" → "All Objects".

vrops-vsan-cache-size-02.png

 

Under "All Objects" → "vSAN Adapter" → "Cache Disk", select a cache disk.

You can see that the ESXi host has a 128 GB SSD for the cache.

vrops-vsan-cache-size-03.png

 

With the cache disk selected, open "All Metrics" → "Capacity Analytics Generated" → "Disk Space"

and double-click "Recommended Size (GB)" to display a graph of the recommended size.

In this environment the recommendation is 59.62 GB, so the cache disk capacity appears to be sufficient.

vrops-vsan-cache-size-04.png

 

Incidentally, cache disk information can also be displayed from "vSAN and Storage Devices" as shown below.

However, when looking only at cache disks, I find the tree under

"All Objects" used above easier to navigate.

vrops-vsan-cache-size-05.png

 

The vSAN metrics are also covered in the documentation,

although they do not seem to match the actual product exactly.

vSAN Metrics

 

This vROps instance was only recently deployed and has just started collecting data,

so I plan to take another look once the Reiwa era begins.

 

That's it for checking the recommended vSAN cache disk size in vROps.

The AirWatch Provisioning App within Workspace ONE is still relatively new and, although it has its quirks, it can still be useful in certain use cases.

 

So what is the AirWatch Provisioning App used for?

 

The app is designed for use cases where there is no on-premises LDAP server that can be used with the Workspace ONE UEM Cloud Connector to synchronize users. It can be used when users are created in Workspace ONE Identity via SCIM or JIT; Workspace ONE Identity will then create the users in Workspace ONE UEM.

 

Let's first discuss some important information about using the AirWatch Provisioning App in Workspace ONE:

 

  • Currently, Workspace ONE will only provision at the top level (Customer) Organization Group (OG) in Workspace ONE UEM.
  • An LDAP server can NOT be configured at the top level OG in Workspace ONE UEM (unless the users exist in the directory that will be created - but if this is the case, you shouldn't be using the provisioning adapter).
  • Workspace ONE Identity needs to be configured as a SAML provider at the top level OG.
  • If you are using JIT to create users in Workspace ONE Identity, you MUST send a valid GUID to Workspace ONE as part of the SAML attributes. This is required if you plan on using the Workspace ONE Hub native application to enroll your device. This GUID will be mapped to the External ID and provisioned to Workspace ONE UEM.
  • If you are using JIT to create users in Workspace ONE Identity, you need to use a web browser to log into Workspace ONE initially before using the Workspace ONE Hub native app. This limitation exists because the user needs to exist in UEM at the time of enrollment.
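As a quick illustration of the GUID requirement above: any standard UUID string is a valid shape for the value sent in the SAML attribute. The snippet below just generates one with Python's uuid module; the actual attribute name and mapping depend on your IdP configuration:

```python
import uuid

# Generate a value with the shape expected for the External ID GUID attribute.
guid = str(uuid.uuid4())
print(guid)  # e.g. "3f2504e0-4f89-41d3-9a0c-0305e82c3301" (random each run)
```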

 

Step 1: Export your Workspace ONE IDP Metadata

  1. Log into Workspace ONE Identity and go to Catalog -> Settings
  2. Click on SAML Metadata
  3. Download your "Identity Provider (IdP) metadata"
    Screen Shot 04-25-19 at 01.13 PM.PNG

 

Step 2: Configure UEM to use SAML Authentication

  1. Log into Workspace ONE UEM
  2. Go to Group & Settings -> All Settings -> System -> Enterprise Integration -> Directory Services
  3. Ensure Directory Type is set to "None"
  4. Enable "Use SAML for Authentication"
  5. Under Enable SAML Authentication for*, check Self-Service Portal and Enrollment.
  6. Enable "Use New SAML Authentication Endpoint"
    Screen Shot 04-25-19 at 01.19 PM.PNG
    Note: This step might be a bit confusing as to why we have to configure UEM in this manner; it was confusing to me at first. The provisioning adapter in Workspace ONE Identity leverages the REST API to create accounts in UEM. Creating user accounts in UEM (of Directory type) requires that either a directory is configured or SAML is enabled. As mentioned earlier, we cannot enable a directory, so we essentially have to configure SAML.


  7. In the SAML 2.0 section, click upload to Import Identity Provider Settings
  8. Select the metadata you downloaded in Step 1.
  9. Scroll down and click save.

 

Step 3: Add AirWatch Provision App in Workspace ONE Identity

  1. In Workspace ONE Identity, go to Catalog-> New
  2. Browse from the Catalog and select "AirWatch Provisioning"
    Screen Shot 04-23-19 at 02.47 PM 002.PNG
  3. Click Next
  4. Edit the Single Sign-On URL and Recipient URL with your UEM server
    Screen Shot 04-25-19 at 02.13 PM.PNG
  5. Keep the "default_access_policy_set" and Click Next
  6. Click Save
    Screen Shot 04-23-19 at 02.49 PM 001.PNG
  7. Select the AirWatch Provisioning App and Click Edit
  8. Click Next
  9. On the Configuration Tab, enable "Setup Provisioning"
    Screen Shot 04-25-19 at 02.13 PM 001.PNG
  10. Click Next
  11. Enter your AirWatch Device Services URL
  12. Enter your Admin Username
  13. Enter your Admin Password
    Note: Whenever you edit this application be very careful of Chrome's password auto-fill. It will update the password if you have one saved in chrome. After you hit test connection it will revert back to your saved password in Chrome.
  14. Enter your AirWatch API Key
    Note: If you don't have an API Key, in UEM, go to Groups & Settings -> All Settings -> System -> Advanced -> API -> REST API
    Click Override -> Add
    Provide a Service Name with the account type of Admin.  Copy the API Key.
  15. Enter your top level OG Group ID
  16. Click Test Connection and validate connectivity.
  17. Click Enable Provisioning
    Screen Shot 04-25-19 at 01.39 PM.PNG
  18. Verify the mapping are correct. If you are using JIT, make sure all these attributes have come over in the SAML assertion.
    Screen Shot 04-23-19 at 02.53 PM 001.PNG
  19. Under Group Provisioning, add any groups you want to provision to UEM.
    Screen Shot 04-23-19 at 02.53 PM 004.PNG
  20. Click Next
  21. Click Save

 

Note: If you get an error when saving, please see the note earlier about chrome's auto password fill.

 

Step 4: Entitle Users to the AirWatch Provisioning App

You have the option of entitling users individually or using a group. If you are using JIT you might want to consider using a dynamic group.

 

  1. Click the Assign button on the AirWatch Provisioning App
  2. Search for the user and/or group
  3. Under "Deployment Type" you MUST Select Automatic. If you leave the default "User Activated" it will never get provisioned to the user.

Screen Shot 04-23-19 at 02.55 PM 001.PNG

 

Step 5: Create a Dynamic Group (Optional)

If you are using JIT to create users in Workspace ONE, it is easier to create a dynamic group and assign that group to the provisioning adapter.

  1. Click on "Users & Groups"
  2. Click on Groups
  3. Click Add Group
  4. Provide a group name and Click Next
  5. Do not select any users and Click Next
  6. Under Group Rules, you can either choose based on the JIT Directory that was created or the domain you chose for the JIT Users
  7. Click Next
  8. Click Next to exclude users
  9. Click Create Group

 

Troubleshooting

  1. If you receive the error "Error not provisioned" in the assignment screen, and hovering over the error message shows "Failed to validate attributes while trying to provision user", the values for the attributes used in the Attribute Mappings of the provisioning adapter configuration are either null or missing. Make sure you create the user in Workspace ONE Identity with all the attributes necessary to create the account in Workspace ONE UEM, including the External ID. Please see the note at the beginning of the blog regarding the External ID.
    Screen Shot 05-01-19 at 09.16 AM.PNG
  2. While trying to enroll your device with the Hub application, you receive a generic error like "An error has occurred". See the note about the External ID.
  3. When trying to provision the Mobile SSO profile you receive an error that the PrincipalName contains an invalid value.
    Screen Shot 05-01-19 at 09.04 AM.PNG
    This means that you have probably created the Workspace ONE UEM account with an email address as the username. When the Mobile SSO certificate payload was created, it used the username as the principal name on the certificate. Unfortunately, iOS does not support the "@" character in the principal name. You have two choices to resolve this issue:
    a) In the AirWatch Provisioning Adapter mappings, use another attribute for the username that does not contain the @ sign. You might need to adjust the values being imported into Workspace ONE Identity (whether by JIT or via the connector).
    b) Use a lookup in Workspace ONE UEM to parse the prefix of the email address and use that in the certificate payload:
    Group & Settings -> All Settings -> Devices & Users -> General -> Lookup Fields
    Add Custom Field
    Create a Name such as EmailNickName and use a regex such as ".+?(?=@)"
    Screen Shot 05-01-19 at 09.14 AM.PNG
    You can then use "EmailNickName" in your Certificate Payload
    Screen Shot 05-01-19 at 09.12 AM.PNG
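The lookahead regex above is easy to sanity-check outside UEM. A small Python sketch, using a hypothetical email address:

```python
import re

# ".+?(?=@)" matches everything up to, but not including, the "@".
match = re.search(r".+?(?=@)", "john.doe@example.com")
print(match.group(0))  # → john.doe
```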









bug tracking software.jpg

Henry Ford once said, “Don’t find fault, find a remedy”. This fits perfectly in line with software development as well. Testing is not only a component of software development; it is a process that defines how products function. Every software development project encounters issues, and quality assurance experts find solutions for them. No matter how hard development teams work on quality assurance, bugs and errors are common in any software. Experts therefore use bug tracking software to track down errors, resolve them, and keep them from recurring.

 

Bug tracking software helps in identifying, recording, reporting, and managing bugs that occur in an application. It is designed to ensure quality and provide bug tracking tools to assist development teams. Errors and bugs appear in any software, and no one should be blamed for that. But it is true that there is no error-free software and that bugs cost businesses huge losses. Companies therefore work diligently to earn and maintain a reputation for quality software.

 

Experts say that as they keep adding features to an application, its functions become more complex. Testers require time to identify and resolve problems that have a direct impact on product quality. The following are a few challenges that testers encounter when using bug tracking software:

 

Bug-logging Process

Bug tracking software should be able to describe a bug properly and give developers a clear understanding of it. If an error is not reported, or required fields are left incomplete, it has a negative effect on the software development process.

 

Bug Tracking Template

Development team members should work from the same bug tracking template, as using different platforms can cause discrepancies. To avoid this confusion, all developers use the same templates so that reporting is simplified.

 

Priority Levels

Testing and development teams assign priority levels to each bug that appears in an application. It allows working on high priority issues first, followed by attending to less important bugs. Issue tracking software works efficiently only when teams prioritize these tasks efficiently.

 

Bug Tracking Tools

A defect management system provides bug tracking tools that allow testers to perform their work efficiently. Testers need these tools on their projects to release quality software.

  • Companies invest in bug tracking software to analyze repetitive bugs and channel that analysis into releasing quality software. Bug tracking is not just about tracking defects; it is also a proactive approach to ensuring that companies meet software requirements and achieve customer satisfaction.

In this blog post, we will walk through the steps to configure iOS Mobile SSO.

 

I will be assuming that your Workspace ONE UEM and Workspace ONE Identity Manager environments have not been previously integrated.

 

This blog will assume that you already have an Enterprise Cloud Connector installed and syncing with Workspace ONE UEM.

 

In this blog, we'll cover:

  1. Configure Workspace ONE Identity in the UEM Console
  2. Enable Active Directory Basic
  3. Enable Mobile SSO
  4. Basic Troubleshooting

 

Validation of Pre-requisites

 

  1. Log into Workspace ONE UEM -> Global Settings -> All Settings -> System -> Enterprise Integration -> Cloud Connector
  2. Ensure AirWatch Cloud Connector is enabled
  3. Perform a Test Connection. Make sure the connection is active
    Screen Shot 04-22-19 at 01.33 PM.PNG
  4. Click on Directory Services from the left menu
  5. Ensure your directory has been configured and you can perform a successful test connection
    Screen Shot 04-22-19 at 01.39 PM.PNG
  6. Close from Settings and go to accounts on the main left in Workspace ONE UEM.
  7. Make sure you have users being synchronized into Workspace ONE UEM
    Screen Shot 04-22-19 at 01.42 PM.PNG

 

Step 1: Configure Workspace ONE Identity in the UEM Console

Although this step is not absolutely required to get Mobile SSO working, I highly recommend you configure it, as it's required for Device Compliance, the Unified Catalog, and UEM Password Authentication.

In previous versions of Workspace ONE UEM, a lot of manual configuration was required to enable Workspace ONE Identity. Using the wizard in Workspace ONE UEM, we can automate many of these tasks.

 

Click on Getting Started

  1. Under Workspace ONE -> Begin Setup
    Screen Shot 04-22-19 at 01.56 PM.PNG
  2. Under Identity and Access Management -> Click Configure for "Connect to VMware Identity Manager"
    Screen Shot 04-22-19 at 01.58 PM.PNG
  3. Click Continue
    Screen Shot 04-22-19 at 02.01 PM.PNG
  4. Enter your Tenant URL, User name, and Password
    Screen Shot 04-22-19 at 02.03 PM 001.PNG
  5. Click Save
  6. If you check your Workspace ONE Identity tenant, you will see that the AirWatch configuration has been completed: Identity & Access Management -> Setup -> AirWatch

 

Step 2: Enable Active Directory Basic

VMware recommends you download and install the VMware Identity Manager Connector to synchronize users from your Active Directory to Workspace ONE Identity. However, for the purpose of this blog, we are going to leverage the built-in capabilities of Workspace ONE UEM to provision users directly into Workspace ONE Identity.

 

  1. In Workspace ONE UEM, Groups & Settings -> All Settings -> System -> Enterprise Integration -> VMware Identity Manager -> Configuration
  2. You will see under the server settings that "Active Directory Basic" is disabled
    Screen Shot 04-22-19 at 02.18 PM.PNG
  3. Click "Enabled" beside Active Directory Basic
  4. You will be prompted to enter your password
    Screen Shot 04-22-19 at 02.19 PM.PNG
  5. Click Next
  6. Enter a name for your directory (this will be the name of the directory in Workspace ONE Identity). You can leave Enable Custom Mapping set to standard
    Screen Shot 04-22-19 at 02.21 PM.PNG
  7. Click Save
  8. If everything worked successfully, you should see a new directory appear in Workspace ONE Identity with your synchronized users:
    Screen Shot 04-22-19 at 02.22 PM.PNG

 

Step 3: Enable Mobile SSO

  1. Let's go back to the "Getting Started" section of Workspace ONE UEM
  2. Under Workspace ONE -> Continue
  3. Under Identity & Access Management -> Mobile Single-Sign-On, click Configure
    Screen Shot 04-22-19 at 02.33 PM.PNG
  4. Click "Get Started"
    Screen Shot 04-22-19 at 02.35 PM.PNG
  5. Click Configure to use the AirWatch Certificate Authority
    Screen Shot 04-22-19 at 02.38 PM.PNG
  6. Click Start Configuration
    Screen Shot 04-22-19 at 02.40 PM.PNG
  7. Click Finish when complete
    Screen Shot 04-22-19 at 02.41 PM.PNG
  8. Click Close

Basic Troubleshooting

There are a variety of reasons that Mobile SSO can fail. Let's go over a few of the common ones.

 

  1. You are prompted for a username/password or the Workspace ONE domain chooser when doing Mobile SSO
    The problem here is that Mobile SSO has failed and Workspace ONE Identity is triggering the fallback authentication mechanism. For the purposes of troubleshooting, I recommend removing the fallback mechanism: in the iOS policy, remove Certificate Authentication and Password (Local Directory). When you test again, you will be prompted with an error message instead.
    Screen Shot 04-22-19 at 03.22 PM.PNG
  2. You are prompted with an error message "Access denied as no valid authentication methods were found"
    a) Check to make sure the "Ios_Sso" profile was pushed to the device. By default, when the profile is created it does not have an assignment group. If not, create a smart group, assign the profile, and publish.
  3. You received the error "The required field “Keysize” is missing" when deploying the iOS Mobile SSO profile
    Something went wrong with the import of the KDC Certificate from Workspace ONE Identity to UEM.
    a) Log into Workspace ONE Identity -> Identity & Access Management -> Identity Providers -> Built-In and download the KDC Certificate:
    Screen Shot 04-22-19 at 04.20 PM.PNG
    b) Now switch back to UEM, Devices -> Profiles & Resources -> Profiles
    c) Edit the iOS profile
    d) Click Credentials and re-upload the KDC Certificate.

  4. You received the message "Kerberos NEGOTIATE failed or was cancelled by the user"

    Unfortunately, this is a catch-all error message for Mobile SSO failures and could mean many things. I'll try to cover some of the common reasons here:

    a) In Workspace ONE UEM, check your iOS Mobile SSO profile -> Single Sign-On. Verify the Realm is correct. For production it should be "VMWAREIDENTITY.COM". However, if you have a localized cloud tenant, this can be different (VMWAREIDENTITY.EU, VMWAREIDENTITY.ASIA, VMWAREIDENTITY.CO.UK, VMWAREIDENTITY.COM.AU, VMWAREIDENTITY.CA, VMWAREIDENTITY.DE). For non-production, you might be on the vidmpreview.com domain; if so, it should be "VIDMPREVIEW.COM"

    b) When you use the wizard to create the Mobile SSO configuration, it will automatically add the application bundle IDs where Mobile SSO is allowed. You will need to either enter all your application bundle IDs into the profile or optionally delete them all. If you don't specify any bundle IDs, it will allow them all. For a POC, I recommend you leave this blank.

    c) Mobile SSO on iOS is based on Kerberos. The Kerberos negotiation works over UDP port 88. Ensure that your firewall is not blocking this port.

    d) The built-in AirWatch Certificate Authority uses the username (usually sAMAccountName) as the principal name on the certificate provisioned to the device. The Kerberos negotiation will use the username to formulate a user principal name, which needs to match in Workspace ONE Identity. A problem can occur when organizations define their UPN with a different prefix than the sAMAccountName: if my username is "jdoe" but my UPN is "john.doe@domain.com", Mobile SSO will fail. In this scenario, we can:

    i) Sync the correct UPN prefix as a custom attribute into Workspace ONE UEM and provision that on the certificate
    ii) Sync the sAMAccountName as the UPN in Workspace ONE Identity (note: this can have potential issues with downstream applications, but you can always pull the UPN as a custom attribute as well)
    iii) Use a custom certificate authority in Workspace ONE UEM and configure a Kerberos template with the correct values.

I recently came across a customer who had many applications running in clusters that required RDMs, and who wanted to automate the process of attaching and sharing the RDMs between multiple Virtual Machines. PowerCLI being the customer's preferred automation method, I started by mapping out the steps that had to be performed to successfully attach and share an RDM.

  • Find the available free ports on the SCSI Controller and add a new SCSI Controller if required.
  • Create a custom object to hold all the information about the virtual machine, RDM and SCSI Controller Bus and Port Numbers being used.
  • Use the information captured to attach the RDM on the first Virtual Machine.
  • Capture the new Disk information and share the same device with the other Virtual Machines.

(Note: All the functions mentioned below were written with the assumption that all Virtual Machines are identical in terms of the storage already mapped.)

 

  • First things first – set up the parameters for the script call.
    1. PrimaryVirtualMachineName – VM on which the RDM will be added initially.
    2. SecondaryVirtualMachinesName – Comma-separated names of the VMs with which the RDM is to be shared.
    3. PathtoRDMfile – Path to the file containing the list of RDM WWNs.

 

param(

        $PrimaryVirtualMachineName,

        $SecondaryVirtualMachinesName = @(),

        $PathtoRDMfile

)

 

  • Now we will create a function that builds a custom RDM object to hold all the information required to successfully attach and share an RDM. This is not strictly required, but it makes it easier to hold all the required information in a single place and retrieve it when needed.

 

function GetVMCustomObject {

param (

        $VirtualMachine,

        $RDMS

)  

$ESXCLI = $VirtualMachine | get-vmhost | Get-EsxCli -V2

$devobject = @()

foreach($RDM in $RDMS)

{

       

        $RDM = 'naa.'+$RDM

        $Parameters = $ESXCLI.storage.core.device.list.CreateArgs()

        $Parameters.device = $RDM.ToLower()

        try{

        $naa=$ESXCLI.storage.core.device.list.Invoke($Parameters)

        write-host found device $naa.device

        $device = New-Object psobject

        $device | add-member -MemberType NoteProperty -name "NAAID" -Value $naa.Device

        $device | add-member -MemberType NoteProperty -name "SizeMB" -Value $naa.Size

        $device | add-member -MemberType NoteProperty -name "DeviceName" -Value $naa.devfspath

        $device | Add-Member -MemberType NoteProperty -name "BusNumber" -Value $null

        $device | add-member -MemberType NoteProperty -name "UnitNumber" -value $null

        #$device | Add-Member -MemberType NoteProperty -Name "Device Key" -Value $null

        $device | add-member -MemberType NoteProperty -name "FileName" -Value $null

        $devobject += $device

 

}

catch

    {

        Write-Host "$RDM does not exist on host $(Get-VMHost -VM $VirtualMachine)"

        Read-Host "Press any key to exit the Script."

        Exit

}

}

return $devobject

}
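The WWN handling at the top of this function, lower-casing each WWN and prefixing it with "naa." to form the ESXi device identifier, can be sketched in isolation. Python here is purely for illustration, and the WWN is a hypothetical value:

```python
# Each WWN from the input file becomes an ESXi device identifier "naa.<wwn>".
wwns = ["600A098038304731452B522F68657230"]  # hypothetical WWN
device_ids = ["naa." + wwn.lower() for wwn in wwns]
print(device_ids[0])  # → naa.600a098038304731452b522f68657230
```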

 

  • Next up is the function to create a new SCSI controller if required.

 

function CreateScSiController {

param (

        [int]$BusNumber,

        $VirtualMachine

)

$spec = New-Object VMware.Vim.VirtualMachineConfigSpec

$spec.DeviceChange = @()

$spec.DeviceChange += New-Object VMware.Vim.VirtualDeviceConfigSpec

$spec.DeviceChange[0].Device = New-Object VMware.Vim.ParaVirtualSCSIController

$spec.DeviceChange[0].Device.SharedBus = 'physicalSharing'

$spec.DeviceChange[0].Device.ScsiCtlrUnitNumber = 7

$spec.DeviceChange[0].Device.DeviceInfo = New-Object VMware.Vim.Description

$spec.DeviceChange[0].Device.DeviceInfo.Summary = 'New SCSI controller'

$spec.DeviceChange[0].Device.DeviceInfo.Label = 'New SCSI controller'

$spec.DeviceChange[0].Device.Key = -106

$spec.DeviceChange[0].Device.BusNumber = $BusNumber

$spec.DeviceChange[0].Operation = 'add'

$VirtualMachine.ExtensionData.ReconfigVM($spec)

}

 

 

  • Next, we will query the existing SCSI controllers, find the available free ports on them, and use the function created above to add a new SCSI controller if required.

Note: This function was written to always start with the SCSI controller with the highest Bus Number, but it could easily be modified to use any of the existing controllers.

 

function SCSiFreePorts {

param (

        #Required ports is RDMS.count

$RequiredPorts,

        $PrimaryVirtualMachine,

        $SecondaryVirtualMachines

)

 

$ControllertoUse = @()

$FreePorts = 0;

$AvailablePorts = @()

while ($FreePorts -lt $RequiredPorts) {

        $ControllerNumber = @()

        $Controllers = Get-ScsiController -VM $PrimaryVirtualMachine | ? {$_.BusSharingMode -eq 'Physical' -and $_.Type -eq 'paravirtual'}

        $LatestControllerNumber = $null

if ($Controllers) {

            foreach ($Controller in $Controllers) {

$ControllerNumber += $Controller.ExtensionData.BusNumber

            }

            $LatestControllerNumber = ($ControllerNumber | measure -Maximum).Maximum

$RecentController = $Controllers | ? {$_.ExtensionData.BusNumber -eq $LatestControllerNumber}

            $FreePorts += 15 - $RecentController.ExtensionData.Device.count

            $ControllertoUse += $RecentController

        }

        if (($FreePorts -lt $RequiredPorts) -and ($LatestControllerNumber -eq 3)) {

            Write-Host "SCSI controller limit has been exhausted and cannot accommodate all RDMs. Exiting the script."

            Exit

        }

        if (($FreePorts -lt $RequiredPorts) -or !$Controllers) {

            CreateScSiController -BusNumber ($LatestControllerNumber+1) -VirtualMachine $PrimaryVirtualMachine

            foreach($Virtualmachine in $SecondaryVirtualMachines)

            {

                CreateScSiController -BusNumber ($LatestControllerNumber+1) -VirtualMachine $Virtualmachine

            }

}

}

foreach ($CurrentController in $ControllertoUse) {

        $ConnectedDevices = $CurrentController.ExtensionData.Device

        $UsedPort = @()

        foreach ($Device in $ConnectedDevices) {

            $DevObj = $PrimaryVirtualMachine.ExtensionData.Config.Hardware.Device | ? {$_.Key -eq $Device}

            $UsedPort += $DevObj.UnitNumber

        }

        for ($i = 0; $i -le 15; $i++) {

            if (($i -ne 7) -and ($UsedPort -notcontains $i)) {

                $PortInfo = New-Object -TypeName PSObject

                $PortInfo | Add-Member -MemberType NoteProperty -name "BusNumber" -Value $CurrentController.ExtensionData.BusNumber

                $PortInfo | add-member -MemberType NoteProperty -name "PortNumber" -value $i

                $AvailablePorts += $PortInfo

            }

        }

}

return $AvailablePorts

}

 

  • Now, the function to attach the RDM to the primary Virtual Machine.

 

function AddRDM {

param (

        $VirtualMachine,

        [String]$DeviceName,

[Int]$ControllerKey,

        [Int]$UnitNumber,

        [Int]$Size

)

$spec = New-Object VMware.Vim.VirtualMachineConfigSpec

$spec.DeviceChange = @()

$spec.DeviceChange += New-Object VMware.Vim.VirtualDeviceConfigSpec

$spec.DeviceChange[0].FileOperation = 'create'

$spec.DeviceChange[0].Device = New-Object VMware.Vim.VirtualDisk

# $SIZE is available in objects returned by GetVMCustomObject, size will be in MB

$spec.DeviceChange[0].Device.CapacityInBytes = $Size*1024*1024

$spec.DeviceChange[0].Device.StorageIOAllocation = New-Object VMware.Vim.StorageIOAllocationInfo

$spec.DeviceChange[0].Device.StorageIOAllocation.Shares = New-Object VMware.Vim.SharesInfo

$spec.DeviceChange[0].Device.StorageIOAllocation.Shares.Shares = 1000

$spec.DeviceChange[0].Device.StorageIOAllocation.Shares.Level = 'normal'

$spec.DeviceChange[0].Device.StorageIOAllocation.Limit = -1

$spec.DeviceChange[0].Device.Backing = New-Object VMware.Vim.VirtualDiskRawDiskMappingVer1BackingInfo

$spec.DeviceChange[0].Device.Backing.CompatibilityMode = 'physicalMode'

$spec.DeviceChange[0].Device.Backing.FileName = ''

$spec.DeviceChange[0].Device.Backing.DiskMode = 'independent_persistent'

$spec.DeviceChange[0].Device.Backing.Sharing = 'sharingMultiWriter'

#Device name is in the format /vmfs/devices/disks/naa.<LUN ID>

$spec.DeviceChange[0].Device.Backing.DeviceName = $DeviceName

#Controller key to be retrieved at run time using controller bus number

$spec.DeviceChange[0].Device.ControllerKey = $ControllerKey

#Unit number is the controller port and will be provided by SCSiFreePorts function

$spec.DeviceChange[0].Device.UnitNumber = $UnitNumber

# $SIZE is available in objects returned by GetVMCustomObject, size will be in MB

$spec.DeviceChange[0].Device.CapacityInKB = $Size*1024

$spec.DeviceChange[0].Device.DeviceInfo = New-Object VMware.Vim.Description

$spec.DeviceChange[0].Device.DeviceInfo.Summary = 'New Hard disk'

$spec.DeviceChange[0].Device.DeviceInfo.Label = 'New Hard disk'

$spec.DeviceChange[0].Device.Key = -101

$spec.DeviceChange[0].Operation = 'add'

return $VirtualMachine.ExtensionData.ReconfigVM_Task($spec)

}
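Both capacity fields in the spec derive from the device size in MB captured by GetVMCustomObject. The conversions can be sketched as follows, using a hypothetical 10 GB device:

```python
# Capacity conversions used when building the VirtualDisk spec.
size_mb = 10240                          # hypothetical 10 GB RDM, size in MB
capacity_kb = size_mb * 1024             # value for CapacityInKB
capacity_bytes = size_mb * 1024 * 1024   # value for CapacityInBytes
print(capacity_bytes)  # → 10737418240
```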

 

  • To share the RDM between the Virtual Machines, we will use the function below.

 

function ShareRDM {

param (

        $VirtualMachine,

        [String]$FileName,

        [Int]$ControllerKey,

        [Int]$UnitNumber,

        [Int]$Size

)

$spec = New-Object VMware.Vim.VirtualMachineConfigSpec

$spec.DeviceChange = @()

$spec.DeviceChange += New-Object VMware.Vim.VirtualDeviceConfigSpec

$spec.DeviceChange[0] = New-Object VMware.Vim.VirtualDeviceConfigSpec

$spec.DeviceChange[0].Device = New-Object VMware.Vim.VirtualDisk

# $SIZE is available in objects returned by GetVMCustomObject, size will be in MB

$spec.DeviceChange[0].Device.CapacityInBytes = $Size*1024*1024

$spec.DeviceChange[0].Device.StorageIOAllocation = New-Object VMware.Vim.StorageIOAllocationInfo

$spec.DeviceChange[0].Device.StorageIOAllocation.Shares = New-Object VMware.Vim.SharesInfo

$spec.DeviceChange[0].Device.StorageIOAllocation.Shares.Shares = 1000

$spec.DeviceChange[0].Device.StorageIOAllocation.Shares.Level = 'normal'

$spec.DeviceChange[0].Device.StorageIOAllocation.Limit = -1

$spec.DeviceChange[0].Device.Backing = New-Object VMware.Vim.VirtualDiskRawDiskMappingVer1BackingInfo

#FileName is the disk filename to be shared in [<Datastore name>] VM Name/disk name.vmdk, to be retrieved at runtime using vm view and device bus number and Unit number

$spec.DeviceChange[0].Device.Backing.FileName = $FileName

$spec.DeviceChange[0].Device.Backing.DiskMode = 'persistent'

$spec.DeviceChange[0].Device.Backing.Sharing = 'sharingMultiWriter'

#Controller key to be retrieved at run time using controller bus number

$spec.DeviceChange[0].Device.ControllerKey = $ControllerKey

#Unit number is the controller port and will be provided by SCSiFreePorts function

$spec.DeviceChange[0].Device.UnitNumber = $UnitNumber

# $SIZE is available in objects returned by GetVMCustomObject, size will be in MB

$spec.DeviceChange[0].Device.CapacityInKB = $Size*1024

$spec.DeviceChange[0].Device.DeviceInfo = New-Object VMware.Vim.Description

$spec.DeviceChange[0].Device.DeviceInfo.Summary = 'New Hard disk'

$spec.DeviceChange[0].Device.DeviceInfo.Label = 'New Hard disk'

$spec.DeviceChange[0].Device.Key = -101

$spec.DeviceChange[0].Operation = 'add'

return $VirtualMachine.ExtensionData.ReconfigVM_Task($spec)

}
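The capacity fields above mix units (CapacityInKB versus CapacityInBytes), which is easy to get wrong. As a quick sanity check, assuming $Size is in MB as the comments state, the conversions work out as follows (an illustrative Python sketch, not part of the script itself):

```python
def mb_to_kb(size_mb: int) -> int:
    """Convert megabytes to kilobytes, as needed for CapacityInKB."""
    return size_mb * 1024

def mb_to_bytes(size_mb: int) -> int:
    """Convert megabytes to bytes, as needed for CapacityInBytes."""
    return size_mb * 1024 * 1024

# A 10 GB disk reported as 10240 MB:
print(mb_to_kb(10240))     # 10485760 KB
print(mb_to_bytes(10240))  # 10737418240 bytes
```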

 

To stitch it all together, we just call the functions we created as and when required, but first, a few pre-checks.

Note: these pre-checks are not exhaustive; they were built to satisfy customer-specific requirements, and more checks and balances could be added.

 

Now we do not want to just start modifying things without making sure that the virtual machines are powered off, do we?

 

$PrimaryVirtualMachine = Get-VM -Name $PrimaryVirtualMachineName

if($PrimaryVirtualMachine.PowerState -ne 'PoweredOff')

{

Read-Host -Prompt "$PrimaryVirtualMachineName is not Powered Off. Make sure all the Virtual Machines are Powered Off before running the script again. Press Enter to exit."

Exit

}

$SecondaryVirtualMachines = @()

foreach($VM in $SecondaryVirtualMachinesName)

{

$SecondaryVM = Get-VM -name $VM

if($SecondaryVM.PowerState -ne 'PoweredOff')

{

Read-Host -Prompt "$VM is not Powered Off. Make sure all the Virtual Machines are Powered Off before running the script again. Press Enter to exit."

Exit

}

$SecondaryVirtualMachines += $SecondaryVM

}

 

We also know that a virtual machine supports a total of 64 disks, of which 4 are IDE, so we will check whether enough SCSI ports are available to successfully attach all the given RDMs.

$RDMS = Get-Content -path $PathtoRDMfile

$AttachedDisks = $PrimaryVirtualMachine | Get-HardDisk

if(($AttachedDisks.Count+$RDMS.count) -gt 60)

{

Read-Host -Prompt 'Configuration maximum for disks reached. Can not attach all provided disks. Press any key to exit.'

exit

}

 

Let's find out the bus numbers and port numbers that we will use to attach the RDMs.

 

$DeviceObjects = GetVMCustomObject -VirtualMachine $PrimaryVirtualMachine -RDMS $RDMS

$PortsAvailable = SCSiFreePorts -RequiredPorts $RDMS.Count -PrimaryVirtualMachine $PrimaryVirtualMachine -SecondaryVirtualMachines $SecondaryVirtualMachines

 

for($i = 0; $i -lt $RDMS.Count; $i++)

{

$CurrentObject = $DeviceObjects[$i]

$PorttoUse = $PortsAvailable[$i]

$CurrentObject.UnitNumber = $PorttoUse.PortNumber

$CurrentObject.BusNumber = $PorttoUse.BusNumber

}
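The SCSiFreePorts function referenced above is defined earlier in the post; conceptually, it enumerates controller bus/unit pairs that are not already in use. Here is a simplified, standalone model of that idea in Python, for illustration only (the `used` set and the 4-controller layout are assumptions based on vSphere's SCSI limits, not the actual PowerCLI function):

```python
def free_scsi_ports(used, buses=4, units_per_bus=16, reserved_unit=7):
    """Return (bus, unit) pairs not present in `used`.

    Unit 7 on each bus is reserved for the controller itself,
    leaving 15 usable units per bus (4 x 15 = 60 SCSI disks total,
    which matches the 60-disk check in the script).
    """
    return [(bus, unit)
            for bus in range(buses)
            for unit in range(units_per_bus)
            if unit != reserved_unit and (bus, unit) not in used]

# With nothing attached, all 60 SCSI ports are free:
print(len(free_scsi_ports(set())))  # 60
```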

 

Now we will use all this collected information and finish the job.

 

foreach($DiskObject in $DeviceObjects)

{

$Controller = Get-ScsiController -VM $PrimaryVirtualMachine | ? {$_.ExtensionData.BusNumber -eq $DiskObject.BusNumber}

$task = AddRDM -VirtualMachine $PrimaryVirtualMachine -DeviceName $DiskObject.DeviceName -ControllerKey $Controller.ExtensionData.Key -UnitNumber $DiskObject.UnitNumber -Size $DiskObject.SizeMB

Start-Sleep -Seconds 5

$PVM = Get-VM -Name $PrimaryVirtualMachineName

$Disk = $PVM.ExtensionData.Config.Hardware.Device | ? {($_.UnitNumber -eq $DiskObject.UnitNumber) -and ($_.ControllerKey -eq $Controller.ExtensionData.Key)}

$DiskObject.FileName = $Disk.Backing.FileName

foreach($VM in $SecondaryVirtualMachines)

{

        $SController = Get-ScsiController -VM $VM | ? {$_.ExtensionData.BusNumber -eq $DiskObject.BusNumber}

        ShareRDM -VirtualMachine $VM -FileName $Disk.Backing.FileName -ControllerKey $SController.ExtensionData.Key -UnitNumber $DiskObject.UnitNumber -Size $DiskObject.SizeMB

 

}

 

}

Write-Host "RDM's have been added on all Virtual Machines with the below details"

$DeviceObjects | Select-Object NAAID,BusNumber,UnitNumber | Format-Table

 

Now just save the file and run it as shown below -

 

<path to the script>\<scriptname.ps1> -PrimaryVirtualMachineName <VM Name> -SecondaryVirtualMachinesName <VM Name>,<VM Name>,<VM Name> -PathtoRDMfile <RDM File Path>

 

This script has been tested in a lab with 1 primary and 2 secondary virtual machines and up to 10 RDM devices. Below are the specific use cases tested.

 

  • RDM attachment with no existing Physical mode SCSI Controller.
  • RDM attachment with existing Physical mode SCSI Controller with no existing RDM.
  • RDM attachment with existing Physical mode SCSI controller with serially attached RDM.
  • RDM attachment with existing Physical mode SCSI controller with randomly attached RDM.
  • RDM attachment across multiple Physical mode SCSI controllers if the existing controller does not have enough ports available.

 

I understand there could be better and/or easier ways to do this, and the script could be modified to be more efficient; any suggestions are welcome. I have attached the completed script to the post, so feel free to use or modify it as deemed fit.

We quite often come across scenarios where customers want to leverage Microsoft Authenticator when using Workspace ONE UEM and/or Horizon.

 

In this blog, I'd like to go through the various options and outline the user experience with each of the options.

 

The main use cases we see are:

 

  • Microsoft MFA for Horizon Desktop
  • Microsoft MFA for SaaS Applications federated directly with Workspace ONE.
  • Microsoft MFA for Device Enrollment in Workspace ONE UEM
  • Microsoft MFA for SaaS Applications federated with Azure AD. (Including Office 365)

 

There are 3 integration options that you can consider to integrate Microsoft Authenticator with Workspace ONE. The use cases previously mentioned can fit into one or more of the following integration options.

 

1. Azure AD as a 3rd Party IdP in Workspace ONE

 

Use Cases:

  • Microsoft MFA for Horizon Desktop
  • Microsoft MFA for SaaS Applications federated directly with Workspace ONE.
  • Microsoft MFA for Device Enrollment in Workspace ONE UEM

 

Use Cases not Supported:

  • Microsoft MFA for SaaS Applications federated with Azure AD. (Including Office 365)

 

 

In this option, the following needs to be configured:

  • Azure AD configured as a 3rd Party IdP in Workspace ONE
  • Workspace ONE configured as an enterprise app in Azure
  • Conditional Access Policy Configured in Azure AD to require Microsoft Authenticator for the Workspace ONE Application.

 

Screen Shot 04-17-19 at 03.11 PM.PNG

Let's walk through the authentication flow in this option:

  1. The user will access their Horizon Desktop (or any application that is federated directly with Workspace ONE).

  2. The application will send a SAML Authentication Request to Workspace ONE
  3. Assuming the access policy in Workspace ONE is configured for Azure Authentication, the user will be redirected to Azure AD.
  4. The user will enter their email address.
  5. Assuming the domain is not currently federated with another IdP, Azure will prompt the user to enter their password.
  6. Azure conditional access policies will then trigger for Microsoft MFA.
  7. The user will be returned to Workspace ONE and subsequently authenticated to Horizon. (Note: Horizon should be configured with TrueSSO for optimal user experience).

 

2. Workspace ONE as a Federated Domain in Azure AD

 

Use Cases:

  • Microsoft MFA for SaaS Applications federated with Azure AD. (Including Office 365)

 

 

Use Cases not supported:

  • Microsoft MFA for Horizon Desktop
  • Microsoft MFA for SaaS Applications federated directly with Workspace ONE.
  • Microsoft MFA for Device Enrollment in Workspace ONE UEM

 

 

 

In this option, the following needs to be configured:

  • Azure domain must be federated to Workspace ONE
  • Conditional Access Policy Configured in Azure AD to require Microsoft Authenticator for the Workspace ONE Application.
  • Mobile SSO/Certificate Authentication Configured in Workspace ONE

Screen Shot 04-17-19 at 05.29 PM.PNG

Let's walk through the authentication flow in this option:

  1. The user will access Office 365 (or any application federated with Azure AD).
  2. The user will enter their email address.
  3. The user will be redirected to Workspace ONE
  4. Workspace ONE will authenticate the user using Mobile SSO, Certificate or some other authentication mechanism (as well as checking device compliance).
  5. Workspace ONE will respond with a successful response back to Azure AD.
  6. Azure conditional access policies will then trigger for Microsoft MFA.
  7. The user will be successfully authenticated into Office 365 (or other Azure-federated applications).

 

3. Workspace ONE with Microsoft Azure MFA Server

 

Use Cases:

  • Microsoft MFA for Horizon Desktop
  • Microsoft MFA for SaaS Applications federated directly with Workspace ONE.
  • Microsoft MFA for Device Enrollment in Workspace ONE UEM
  • Microsoft MFA for SaaS Applications federated with Azure AD. (Including Office 365)*

          *For Office 365 (and other apps federated with Azure), the Azure domain must be federated with Workspace ONE.

 

Use Cases not supported:

  • N/A

 

In this option, the following needs to be configured:

  • Azure MFA Server downloaded and installed on premises.
  • Workspace ONE connector installed on premises.
  • Workspace ONE configured as a RADIUS client in Azure MFA Server.

 

 

Screen Shot 04-17-19 at 05.41 PM.PNG

Let's walk through the authentication flow in this option:

  1. The user will access any application federated with Workspace ONE (or a Horizon/Citrix application).
  2. Workspace ONE will prompt for their username/password.
  3. After clicking "Sign In", a RADIUS call via the connector will be made to the Microsoft Azure MFA Server.
  4. The MFA server will push a notification to the device to approve the request:

If you have configured Okta as a 3rd Party IDP in Workspace ONE you might have noticed that the "Logout" function in Workspace ONE doesn't log you out of your Okta session. The reason for this is that Okta does not include the "SingleLogoutService" by default in the metadata that is used when creating the 3rd Party IDP in Workspace ONE.

 

There are a couple of extra steps that you need to take to enable this functionality. Before you begin, please make sure you download your signing certificate from Workspace ONE.

 

  1. Log into Workspace ONE
  2. Click on Catalog -> Settings (Note: Don't click the down arrow and settings)
    Screen Shot 04-17-19 at 10.55 AM.PNG
  3. Click on SAML Metadata
  4. Scroll down to the Signing Certificate and Click Download
    Screen Shot 04-17-19 at 11.01 AM.PNG

Now you will need to log into your Okta Administration Console.

  1. Under Applications -> Click on the Workspace ONE application that you previously created
    Screen Shot 04-17-19 at 11.04 AM.PNG
  2. Click on the General Tab
  3. Under SAML Settings -> Click Edit
  4. Click Next
  5. Click on "Show Advanced Settings"
    Screen Shot 04-17-19 at 11.06 AM.PNG
  6. Enable the Checkbox that says "Enable Single Logout"
    Screen Shot 04-17-19 at 11.07 AM.PNG
  7. Under "Single Logout URL", enter:  "https://[WS1Tenant]/SAAS/auth/saml/slo/response"
    Screen Shot 04-17-19 at 11.09 AM.PNG
  8. Under SP Issuer, copy the value you have configured for Audience URI (SP Entity ID). This value should be: "https://[WS1Tenant]/SAAS/API/1.0/GET/metadata/sp.xml"
    Screen Shot 04-17-19 at 11.12 AM.PNG
  9. Under "Signature Certificate", browse to the location you downloaded the Workspace ONE certificate in the previous steps.
  10. Click Upload Certificate
  11. Click Next
  12. Click Finish
  13. Click on the "Sign On" tab
  14. Click on Identity Provider Metadata
    Screen Shot 04-17-19 at 11.15 AM.PNG
  15. You will notice that your Identity Provider Metadata now includes the SingleLogoutService:
    Screen Shot 04-17-19 at 11.19 AM.PNG
  16. Copy this metadata.
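The two tenant-specific values in steps 7 and 8 follow a fixed pattern. A tiny helper to derive them from the tenant hostname (illustrative only; `example.vmwareidentity.com` is a hypothetical tenant FQDN standing in for your own):

```python
def ws1_slo_urls(tenant: str) -> dict:
    """Derive the Okta-side SLO settings from a Workspace ONE tenant FQDN."""
    return {
        "single_logout_url": f"https://{tenant}/SAAS/auth/saml/slo/response",
        "sp_issuer": f"https://{tenant}/SAAS/API/1.0/GET/metadata/sp.xml",
    }

urls = ws1_slo_urls("example.vmwareidentity.com")
print(urls["single_logout_url"])
# https://example.vmwareidentity.com/SAAS/auth/saml/slo/response
```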

 

Now switch back to Workspace ONE

 

  1. Go to Identity & Access Management
  2. Click on Identity Providers
  3. Click on your Okta 3rd Party IDP you previously created
  4. Paste your new Okta Metadata and click "Process IdP Metadata"
    Screen Shot 04-17-19 at 11.22 AM.PNG
  5. Scroll down to "Single Sign-out Configuration" and check "Enable". (Note: Make sure the other two values are left blank)
    Screen Shot 04-17-19 at 11.24 AM.PNG

Now you should be able to logout from Workspace ONE and be signed out of both solutions.

 

Screen Shot 04-17-19 at 11.25 AM.PNG

VMware's Workspace ONE provides a digital workspace platform with a seamless user experience across any application on any device. Users can access a platform-native catalog to download and install applications regardless of whether it's an iOS, Android, Windows 10 or macOS device. They can access both web and SaaS applications as well as their virtualized applications, including Horizon and Citrix. Workspace ONE is designed to keep the user experience "consumer simple" while keeping the platform "enterprise secure".

 

VMware promotes the "Zero-Trust" approach when accessing corporate applications. Workspace ONE Unified Endpoint Management is a critical element to achieve a zero-trust model to ensure the device itself is secure enough to access your corporate data.  However, to achieve a zero-trust model we need to include both the Device Trust and the Identity Context.  This is where the Risk-Based Identity Assurance offered by RSA SecurID Access becomes the perfect complement to Workspace ONE.

 

RSA SecurID Access makes access decisions based on sophisticated machine learning algorithms that take into consideration both risk and behavioral analytics. RSA SecurID Access offers a broad range of authentication methods including modern mobile multi-factor authenticators (e.g., push notification, one-time password, SMS and biometrics) as well as traditional hard and soft tokens.

 

I'm pretty excited about the integration between Workspace ONE and RSA SecurID Access because it offers extreme flexibility to control when and how multi-factor authentication will be used. After the initial setup, it also allows me to control everything from Workspace ONE.

 

RSA SecurID Access provides 3 levels of assurance that you can leverage within your access policies. You have full control to modify the authenticators into the appropriate levels based on your licensing from RSA.

 

Screen Shot 04-15-19 at 02.09 PM.PNG

 

You can create Access Policies in RSA SecurID Access that will map to the appropriate assurance levels:

 

Screen Shot 04-15-19 at 02.14 PM.PNG

 

In my environment, I've created 3 policies:

Screen Shot 04-15-19 at 03.09 PM.PNG

Once you've completed your access policies, you can then add your Workspace ONE tenant as a relying party.

 

Screen Shot 04-15-19 at 05.11 PM.PNG

 

Now this is where things get really interesting, and you'll see why I'm excited about this integration. It's fairly common for a digital workspace or web portal to call out to an MFA provider to perform the necessary authentication and return the response. The problem that typically comes into play is whether the authenticators being used for MFA are too much or too little for the application being accessed. In most cases, the MFA provider is not aware of which application is being accessed and is only responding to the call from the relying party. Keep in mind that user experience is at the forefront of the Workspace ONE solution.

 

The integration between Workspace ONE and RSA SecurID Access allows us to control which Access Policy (or level of assurance) will be used from within Workspace ONE.

 

In Workspace ONE, we can create the same policies that we did in RSA SecurID Access:

Screen Shot 04-15-19 at 02.46 PM.PNG

 

In Workspace ONE we can directly assign Web, SaaS or Virtual applications that require High Assurance into the "High Assurance" access policy and apps that require "Medium or Low Assurance" into the appropriate policy. When applications are accessed in Workspace ONE, it will automatically send the request to RSA SecurID Access with the requested policy to use for authentication.

 

So how does Workspace ONE specify which policy RSA SecurID Access should use for authentication? It's actually quite simple. The integration between Workspace ONE and RSA SecurID Access is based on SAML.

 

Initial authentication into Workspace ONE will typically come from Mobile SSO or Certificate Based Authentication (although other forms of authentication are available). After the initial authentication or once the user clicks on a specific application, Workspace ONE will send a SAML Authentication Request which will include the subject who needs additional verification:

 

<saml:Subject xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion">

        <saml:NameID Format="urn:oasis:names:tc:SAML:1.1:nameid-format:unspecified">steve</saml:NameID>

</saml:Subject>

<samlp:NameIDPolicy AllowCreate="false"

 

When the SAML Request is sent from Workspace ONE, it will also include the access policy as part of the SAML AuthnContextClassRef:

 

<samlp:RequestedAuthnContext xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol">

        <saml:AuthnContextClassRef xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion">urn:rsa:names:tc:SAML:2.0:ac:classes:spec::LowWS1</saml:AuthnContextClassRef>

</samlp:RequestedAuthnContext>

 

You can see that the AuthnContextClassRef specifies the policy that RSA SecurID Access should use for authentication.
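To make the mechanics concrete, here is a minimal sketch of how such a RequestedAuthnContext fragment could be generated. This is illustrative Python, not the actual Workspace ONE implementation; the helper name and the hard-coded class reference are assumptions taken from the snippet above:

```python
import xml.etree.ElementTree as ET

SAML_NS = "urn:oasis:names:tc:SAML:2.0:assertion"
SAMLP_NS = "urn:oasis:names:tc:SAML:2.0:protocol"

def requested_authn_context(policy_class_ref: str) -> str:
    """Build a <samlp:RequestedAuthnContext> carrying the policy class ref."""
    ctx = ET.Element(f"{{{SAMLP_NS}}}RequestedAuthnContext")
    ref = ET.SubElement(ctx, f"{{{SAML_NS}}}AuthnContextClassRef")
    ref.text = policy_class_ref
    return ET.tostring(ctx, encoding="unicode")

# Example: request the low-assurance policy shown in the post.
fragment = requested_authn_context(
    "urn:rsa:names:tc:SAML:2.0:ac:classes:spec::LowWS1")
print(fragment)
```

The identity provider then matches the class reference against its configured policies, which is exactly what the 3rd Party IDP configuration below makes possible.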

 

When you create a 3rd Party IDP for RSA SecurID Access, you can specify the AuthnContextClassRef when defining the authentication methods:

Screen Shot 04-15-19 at 05.02 PM 001.PNG

Screen Shot 04-15-19 at 05.03 PM.PNG

 

I've actually left out a key element of the RSA SecurID Access solution, which is the Risk Level. Even though we've specifically called out the Low Assurance Policy, we can have RSA dynamically change that to High based on the user's risk score. RSA SecurID Access can use an "Identity Confidence" score to choose the appropriate assurance level. This is configured in the access policy:

 

Screen Shot 04-17-19 at 01.45 PM.PNG

 

By leveraging RSA SecurID Access with VMware Workspace ONE, we can now have risk-based identity assurance on a per-app level within Workspace ONE. For current Workspace ONE customers, this integration is based on SAML, so it does not require RADIUS and has no dependency on the VIDM Connector.

 

Together this keeps the user experience great on apps that might not need a high level of assurance and keeps the enterprise secure on the apps that require the high level of assurance.
