Blog Posts

Get NAA ID and Name in PowerShell

Posted by jatinjsk Jan 23, 2020

$DSList = Get-Datastore | Where-Object {$_.Type -eq 'VMFS'}
$output = "" | Select-Object Name, NaaID

foreach ($ds in $DSList)
{
    $output.Name  = $ds.Name
    # NAA ID of the datastore's first (canonical) VMFS extent
    $output.NaaID = $ds.ExtensionData.Info.Vmfs.Extent[0].DiskName
    $output | Export-Csv DSList.csv -Append -NoTypeInformation
}




APIs can be very useful to automate processes and integrate systems. VMware Workspace ONE Access has a full set of REST APIs that you can leverage.


The steps below show how to connect to a Workspace ONE Access server and send an API request:


1. Log in to your Workspace ONE Access environment as an admin. In this example I am using Google Chrome, so the following options may vary if you are using a different browser.


Screenshot 2020-01-22 at 13.27.47.png


2. On the "Dashboard" page, press F12 to view the Developer Tools. Alternatively, navigate to Menu (three dots) > More Tools > Developer Tools.


Screenshot 2020-01-23 at 10.08.35.png



3. Select the Application tab and then expand Cookies.


Screenshot 2020-01-22 at 15.54.09.png


4. Under Cookies, select your IDM URL, highlight the HZN cookie and copy its Value.


Screenshot 2020-01-23 at 10.10.27.png


5. Open your API client tool. In this example I am using Postman.


6. Select your API request method (e.g. GET) and enter the URL for it. Under the Headers tab, enter the following:


Key: Authorization

Value: HZN <Cookie value copied in step 4>


Screenshot 2020-01-22 at 14.00.24.png



7. Enter any other required fields (depending on your request) and click Send.
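The same request can be sketched outside Postman with Python's standard library. In the sketch below, the tenant hostname, the cookie value, and the `/SAAS/jersey/manager/api/scim/Users` endpoint path are placeholders/assumptions for illustration, not values from this post:

```python
import urllib.request

# Hypothetical values -- substitute your own Workspace ONE Access hostname
# and the HZN cookie value copied in step 4.
tenant = "https://tenant.vmwareidentity.com"
hzn_cookie = "<cookie value from step 4>"

# Build a GET request carrying the Authorization header described in step 6.
req = urllib.request.Request(
    tenant + "/SAAS/jersey/manager/api/scim/Users",  # example endpoint
    headers={
        "Authorization": "HZN " + hzn_cookie,
        "Accept": "application/json",
    },
)

# urllib.request.urlopen(req) would actually send the request; here we only
# confirm the header is shaped the way step 6 describes.
print(req.get_header("Authorization").startswith("HZN "))  # -> True
```

Sending the prepared request is then a one-liner (`urllib.request.urlopen(req)`), with the response body available via `.read()`.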



More information, including a list of the API calls that can be used with VMware Workspace ONE Access, can be found at:






The postings on this site are my own and do not represent VMware’s positions, strategies or opinions.

Part 3 of 3

Part 1

Part 2

C. Release Alpha/ Beta track app to Production

1. In the Google Play console, while your app is selected, go to Release Management\ App releases. In Alpha, select “Manage”.


2. Select “Release to Production” at the Release section.


3. You will see the new release to production page.


Scroll to the bottom and click “Save”, then “Review”.


4. Click “Start Rollout to Production”. This will release the Alpha/Beta apk to the Production track.


5. The Alpha (or Beta) track will now be empty and show it was promoted to Production.


In UEM, all the devices that have the Prod version assigned to them will see that the update is available in the Managed Play store.

In the case where the Alpha (Beta) track is superseded, devices in the Alpha (Beta) track will get the Production version of the app.

UEM currently whitelists the track it sees the device is first assigned to in UEM (following the priority in Assignments of the app in UEM).


Note: It may take time for any new version of the app uploaded in the Play console (or via Workspace ONE in the iFrame) to get automatically installed on the work profile. Refer to this Google article. To manually install the available update, end users can go to the Managed Play store and see available updates in the My work apps\ Updates section.


Manage app updates - Managed Google Play Help

Part 2 of 3

Part 1

Part 3

B. Add a New App Version

The steps below outline how to publish apps to the alpha or beta testing tracks in the Google Play console, then assign those versions to Workspace ONE UEM smart groups.

1. Login to the Workspace ONE UEM console

2. Go to Apps & Books\ Applications\ Native\ Public\ Click “Add Application”

3. Select Platform: Android. Name can be kept blank. Click “Next”

4. Select the private apps icon on the left.



5. Click “Make advanced edits” under Advanced editing options. This will take you to the Google Play console login page.


6. After logging in to the Google Play console using the Google account tied to your Workspace ONE tenant, go to your app and navigate to Release management\ App release. You can select the alpha or beta track. In this example, we will add an apk to the Alpha track. Click "Manage" in the Alpha track.


7. Under Organizations, click "Edit".


8. Check the organization corresponding to the Workspace ONE organization group. Click “Done”.


9. Click “Edit Release”


10. Add the apk file. After adding the apk, you will see details about the version code and size of the file.


11. Click “Save” at the bottom of the page, then “Review”.


12. View any of the warning messages and make changes to the app, as necessary.



13. Click “Start Rollout”, then “Confirm” at the pop-up window.


14. In UEM console, select the app under Apps & Books\ Native\ Click “Assign”


15. Click “Add Assignment”


16. Select the Assignment Group that you want to receive the new (alpha) version of the app. Enable Managed Access and select Alpha as the Pre-release version. Click "Add".


17. In the verification screen, move up the priority of the group to which the pre-release is assigned.


18. Then click “Save and Publish”


19. Click “Publish” to confirm the assignment. This will make the alpha version of the app available to the devices belonging to the smart group chosen in Step 16.


Note: The same process applies to a Beta release.


(continues to Part 3)

Part 1 of 3

Part 2

Part 3


This step-by-step guide shows how to upload internal apps (apks you’ve developed) to the Android Managed Play Store for your organization via Workspace ONE. Subsequent sections also show how to add other versions for Alpha/ Beta testing in the Google Play console, then manage assignment of those versions to specific devices/ users in Workspace ONE.



Prerequisites:

  1. Workspace ONE environment already registered to Android EMM


  2. An APK file with an application ID that has not been published in the public Google Play store.


A. Publish a New Application

1. Login to the Workspace ONE UEM console.

2. Go to Apps & Books\ Applications\ Native\ Public\ Click “Add Application”.

3. Select Platform: Android. Name can be kept blank. Click “Next”.


4. Select the private apps icon on the left.


5. Click the “+” button to add a new app.


6. Make sure to add a Name, then select “Upload APK”.


The “Create” button will be enabled if the app can be uploaded.


7. You will see the app in the Private apps section, and a notification that publishing in your store may take up to 10 minutes.


8. Close this screen. The app you just uploaded will appear in the app list under Public apps.


(Optional) To edit the logo shown in the console, click on the pencil icon beside the app.  Note that this only updates the icon in UEM, not in the Play store.



9. Save and assign the app.


10. Click Add Assignment


11. Pick the organization group/smart group you would like to assign the app to. Click "Add".


12. The Update Assignment pop-up will appear. Click "Save and Publish" to confirm, then "Publish" on the Preview Assigned Devices page.



13. You will return to the app list screen. If the deployment is set to "Automatic", the app will be installed automatically on the device and appear in both the Workspace ONE Hub/Catalog and the Google Play Store.




(continues to Part 2)







Blog title: [VxRail] Migrating from an existing environment to a vSAN environment, Part 1: Introduction










When you first create a blog post, set the title using only ASCII characters and save it with Save Draft.








In 2020 I started this blog; my goal is to share some of my thoughts and predictions about where technology, and the use of that technology, is going. I will try to share my thoughts on harnessing the technology and processes available today to fast-track the delivery of services and become a strategic partner to the business.


A little about me: I have been actively working on some type of computer since 1991; while still at high school I developed a passion for blinking lights, whether on a hard drive or a network router. Since then I have been developing skills that allow me to pursue any opportunity that presents itself. What I have found is that my skill set, extremely broad yet focused around process, has allowed me to understand many aspects of Information Technology and lets me adapt and learn as new technology comes to the fore.


During my career I have had the opportunity to act in many roles, each linked in some way to a business benefit that was being developed, improved or maintained. Customer satisfaction and happiness are directly proportional to how well these services meet expectations; you may have noticed I did not say requirements. Requirements are documented and measurable; what I am talking about is the perception that the service is meeting the business requirement. My experience has taught me to cut through the smoke and mirrors and evaluate whether a user's expectation can be met in a cost-effective and efficient manner that will continue to deliver value to all parties in the future.


I will not get it right every time, but my goal is to provide a unique perspective on all the technology we interface with and leverage to get ahead of the pack. There will be some alignment with trends and technology where I see them adding value, and in my current role my perspective on where the industry is and what works or doesn't is appreciated. Don't take my perspective and forcefully apply it to your use cases; develop your own perspective and understanding of your unique challenges.

Sharing my script to display the VMware Tools version and status.


From a Host or cluster level

# "<host or cluster name>" is a placeholder -- the original value was lost
Get-VM -Location "<host or cluster name>" | % { Get-View $_.Id } |
Select-Object Name, @{Name="ToolsVersion"; Expression={$_.Guest.ToolsVersion}}, @{Name="ToolsStatus"; Expression={$_.Guest.ToolsVersionStatus}} | Sort-Object Name


From a VM list

$vmNames = Get-Content -Path D:\vmNames.txt
Get-VM -Name $vmNames | % { Get-View $_.Id } | Select-Object Name, @{Name="ToolsVersion"; Expression={$_.Guest.ToolsVersion}}, @{Name="ToolsStatus"; Expression={$_.Guest.ToolsVersionStatus}} | Sort-Object Name


VMware Support Insider





Transferring files through vSphere Client might fail (2147256)


The KB describes an issue where uploading files to a datastore or deploying an OVA/OVF via the vSphere Client fails; from a support standpoint, it's the kind of case where every time an inquiry comes in you think, "ah, that one again."







ovf error.PNG






It made me keenly aware of my own immaturity as a support engineer, and at the same time glad that I follow VMware Support Insider.



### (Bonus) VMware KB trivia

A VMware KB shows a Create Date and a Last Update Date, but be careful: the Create Date is not the same as the (external) publish date.

The Create Date seems to be simply when the KB itself was first created, which is not necessarily the date it was published externally.

For example, one KB has a Create Date of 2019/6/10, but since it describes an improvement in vSphere 6.7U3, it was not published before the release of vSphere 6.7U3 (2019/8/20).

You will rarely need to care about the Create Date or Last Modified Date, but if you do check them, keep the above in mind.









vSAN is fundamentally thin provisioned, so space is allocated little by little as virtual machines on vSAN consume their virtual disks.

Once a region has been allocated it stays allocated, so even if it is no longer in use it cannot be reclaimed afterwards. (*prior to vSAN 6.7U1)

vSAN objects can be made to behave like lazy-zeroed thick rather than thin, but the default is thin, so unless you specify otherwise you get the behavior above.














This feature has been supported since vSAN 6.7U1. (It is disabled by default.)




As mentioned above, Unmap is a vSAN 6.7U1 feature, so as prerequisites both ESXi and vCenter must be vSphere 6.7U1 or later, and the vSAN disk format must be version 7 or higher.


The Unmap feature is disabled by default; how to use it is described in the VMware Docs:

Reclaiming Space with SCSI Unmap

UNMAP/TRIM Space Reclamation on vSAN | vSAN Space Efficiency Technologies | VMware



    • Confirm that the prerequisites are met
    • Enable the feature via RVC
    • Configure each guest OS










For CentOS 7 with LVM





Specifically, in the /etc/lvm/lvm.conf file, change

issue_discards = 0

to

issue_discards = 1





Specifically, you need to edit /etc/fstab and add the discard option to the target filesystem.

In my environment, /dev/mapper/centos-home, created with LVM, was formatted with XFS and mounted as /home, so fstab contained the following:

/dev/mapper/centos-home /home                   xfs     defaults

Change this to:

/dev/mapper/centos-home /home                   xfs     defaults,discard





According to the VMware Docs, running the fstrim command is the recommended way to do this.





(Command) # fstrim -v /home

(Output) /home: 1.7 TiB (1838816620544 bytes) trimmed

* The -v option is optional, but without it nothing is printed, so I always add it. (It gives me a vague sense of reassurance.)









Checking Unmap I/O

vSAN performance statistics include Unmap I/O as its own item, so you can check it there.

Unmap I/O is recorded on the host where the target virtual machine is running.

In other words, if virtual machine A, which ran Unmap, was running on ESXi B, you need to check ESXi B's vSAN statistics.

Log in to the vSphere Client (HTML5), select ESXi B in the Hosts and Clusters view, and go to Monitor → vSAN → Performance.







Below is the data after running Unmap. Unmap IOPS increased, and Trim/Unmap throughput was recorded at the same time.

While Unmap I/O is present, Unmap is still in progress; once the Unmap I/O disappears, it can be considered complete.




As for the result of Unmap, you can confirm it in the vSphere Client (HTML5) by checking that vSAN free capacity has increased, or that the space the target virtual machine consumes on the datastore has decreased.








In my lab environment, Unmap did not work for virtual machines replicated with Dell EMC RP4VM.





Unmap is a quietly important feature, so if you are using a version prior to vSAN 6.7U1, please consider upgrading.



I'm Yamabe, vExpert 2019. This is the day-21 post of the Japanese vExperts Advent Calendar 2019.


Time flies, and it's Advent Calendar season again in 2019. In last year's vExperts Advent Calendar 2018, I wrote an introductory post on virtual GPUs, hoping that as many people as possible would learn about the NVIDIA virtual GPU solution, which brings GPU power to VMware vSphere environments.

(If you're interested in last year's post, see here!)



1) People say "Windows 10 VDI needs a GPU", but is it really used?

2) New workloads leveraging the NVIDIA virtual GPU solution

GPU: to quote Wikipedia, "A graphics processing unit (GPU) is a processor or compute unit specialized for real-time image processing, as typified by computer games."

- Wikipedia: Graphics Processing Unit -


1) People say "Windows 10 VDI needs a GPU", but is it really used?

Last year I explained that in the Windows 10 era, GPU power is essential even for business users who mainly use Microsoft Office and web browsers, compared with Windows 7-era OSes and applications. And in fact, more and more companies and organizations are experiencing this for themselves and adopting the NVIDIA virtual GPU solution for office-productivity VDI environments. Several case studies have been published; here are a few.


[NVIDIA virtual GPU solution success stories]

Häagen-Dazs Japan, Tokyu Livable


Many people probably still think "GPU-equipped VDI means CAD workstations!", but as the published cases show, adoption of the NVIDIA virtual GPU solution for office-productivity VDI is spreading in industries far removed from the graphics-heavy image of CAD and design: food retail, real-estate brokerage, life insurance, and so on. As the section title ("People say Windows 10 VDI needs a GPU...") suggests, the trigger for adopting GPUs is the migration to Windows 10. In the past, security-first VDI was often something you "put up with, slow as it was, for the sake of security"; these days, more and more users choose stress-free VDI that supports productivity gains through telework, because a VDI whose slowness you must endure lowers productivity and does not deliver the workstyle reform you expect.

As its system requirements state, Windows 10 assumes hardware acceleration (a GPU). Indeed, as the figure below shows, GPU usage by applications running on the OS keeps increasing, and when you run Windows 10 in a VDI environment without a virtual GPU to absorb that ever-growing graphics demand, the VDI becomes stressful. I covered this last year as well; through the annual feature updates of 2017, 2018 and 2019, it has evolved just as predicted.



  Improving user experience with GPU acceleration (a comparative analysis of Windows 10 and Windows 7 environments)





2) New workloads leveraging the NVIDIA virtual GPU solution

In this chapter, following the traditional CAD workstation use case and the expanding office-productivity use case, I'd like to introduce newly emerging uses and needs. NVIDIA, the provider of the virtual GPU technology, announced vComputeServer in the summer of 2019, timed with VMworld 2019 in San Francisco.

vComputeServer means that the virtual GPU now supports GPU-computing use cases; put a little more concretely, using the GPU not for image processing but for computation.


Of course, large-scale climate simulations and simulations of astronomical bodies or satellite orbits are the domain of supercomputer clusters, not of the vGPU approach of splitting a single GPU. But not every simulation or training workload needs huge numbers of GPUs; workloads that would be computed on a workstation or server can be virtualized and moved to an environment where a GPU is shared by multiple virtual machines. That makes adoption easier cost-wise, and by splitting off and allocating just the GPU power needed at any given time, the GPU can be used efficiently (a computing environment that is easy for everyone to use). Furthermore, by combining this with VMware Horizon's display transfer, you can visualize and process results (large data sets) while leaving them in the datacenter, without waiting for them to download to the endpoint (again: a computing environment that is easy for everyone to use).


Incidentally, readers familiar with compute workloads may be thinking, "Doesn't the virtual GPU lack ECC and memory page retirement, making it unreliable?" ECC (error-correcting code) detects and corrects internal data corruption, and page retirement automatically takes pages of memory cells marked as having errors offline. Older versions of the NVIDIA vGPU software did not support ECC or page retirement, but they have been supported since version 9 (R430), so rest assured.


・NVIDIA vGPU Software Documentation (R430 for VMware vSphere Release Notes)




Finally, let me close with my favorite "I am AI" video!



Makoin's Horizon post, "Publishing applications from Windows 10", covers a long-awaited feature, so I think it will be very useful. If you're interested, do check it out!


And day 22 of the vExperts Advent Calendar 2019 will be yuki. Over to you!


vExpert 2019, Yamabe (Twitter: @virtapp_life)


For more about vExpert →




For updates on this blog and other blogs, follow me on Twitter: @SteveIDM


We mostly talk about SAML with Workspace ONE, but I'm occasionally asked whether Workspace ONE Access can support OpenID Connect. The answer is yes, of course it can. Just keep in mind before you start to configure OpenID Connect that Workspace ONE Access only supports the email, profile and user scopes. There is no support for custom scopes, nor the ability to modify the attributes returned in the provided scopes.


Workspace ONE Access supports the Authorization Code Grant as well as Client Credentials.


Let's walk through the process of setting up an OIDC application. We are going to use the OpenID Connect Debugger application from Auth0.


Create the SAAS Application


  1. In the Workspace ONE Administration Console, go to Catalog -> Webapps
  2. Click New
  3. Provide a Name, e.g. OpenID TestApp
    Screen Shot 12-18-19 at 02.33 AM.PNG
  4. Click Next
  5. Select OpenID Connect from the drop-down list
    Screen Shot 12-18-19 at 02.33 AM 001.PNG
  6. Complete the fields as per your application requirements. The following is a sample for the Auth0 OpenID Connect app.
    Target URL
    This is just a web link to the target application
    Redirect URL
    If you need more than one redirect URL, you can add the others later. Only one will be accepted here.
    Client ID
    Enter any Client ID that will be used in the calling application. Do not use spaces or special characters.
    Client Secret
    Enter a secret that will be used by the calling application.
    Screen Shot 12-18-19 at 02.35 AM.PNG
  7. Click next
  8. Click Save
  9. Assign this application to your users.


Modify the Remote App Access Client

A remote app access client will automatically get created. We will need to modify this client.


  1. Go to Catalog -> Settings
  2. Click on Remote App Access
  3. In the Client List, look for the Client ID that was used in the earlier step. In my example, I used "MyOIDCTester"
  4. Click on the Client ID
  5. Under Scopes, Click Edit
  6. Select Email and Profile

    Note: This will remove the Admin scope. If you really need to keep the admin scope you will need to perform this step using the API.

  7. Click Save
  8. If you want to prompt the user to authorize the user grants, you will need to do the following steps (I will skip this for now):
    1. Click Edit beside Client Configuration
    2. Select "Prompt Users for Access"


Testing with the Auth0 OpenID Connect Debugger

  1. Go to
    Screen Shot 12-18-19 at 03.33 AM.PNG

  2. Click on Configuration
    Discovery URL



    Authorization Token Endpoint



    Token Endpoint

    Token Keys Endpoint



    OIDC Client ID: MyOIDCTester
    OIDC Client Secret: ThisIsMySecretKey
    Scope: email profile user openid
    Screen Shot 12-18-19 at 04.15 AM.PNG
  3. Click Save
  4. Click Start
  5. When prompted to authenticate, select your domain-based credentials (do not use the System Domain)
  6. If you selected "Prompt Users for Access", the user will be prompted and required to Allow Access:
  7. You will now see your Authorization Code in the OIDC Debugger. Click Exchange to get your Access Token.
  8. You will now see your Bearer Token, ID Token and your Refresh Token.
  9. Click Next
  10. The ID Token will contain information regarding the identity. Click "View on JWT.IO" to see your JSON Tokens.
  11. Your JWT token will be displayed with your profile and user data:
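Instead of pasting the ID token into JWT.IO, you can also inspect its claims locally: a JWT is three base64url-encoded segments, and the middle one is the JSON payload. A minimal sketch with a hypothetical helper (`jwt_payload` is my own name, and the sample token below is hand-made, not one issued by Workspace ONE Access); note this only decodes, it does not verify the signature:

```python
import base64
import json

def jwt_payload(token: str) -> dict:
    """Decode the claims segment of a JWT without verifying its signature."""
    payload_b64 = token.split(".")[1]
    # base64url strips '=' padding; restore it before decoding
    payload_b64 += "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# Hand-made sample token (header.payload.signature) for illustration only
claims = {"sub": "jdoe", "email": "jdoe@example.com"}
sample = ".".join([
    base64.urlsafe_b64encode(b'{"alg":"none"}').rstrip(b"=").decode(),
    base64.urlsafe_b64encode(json.dumps(claims).encode()).rstrip(b"=").decode(),
    "",
])
print(jwt_payload(sample)["email"])  # -> jdoe@example.com
```

For real tokens you should verify the signature against the keys from the Token Keys endpoint (step 2 above) with a proper JWT library rather than trusting a bare decode.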

Many people are confused about how a host actually assigns CPU resources to virtual machines; more precisely, how a VM's processing is executed on the physical CPU resources. In Intel terminology the physical processor is a CPU socket, but in this post I treat a pCPU as a physical core within the server's existing sockets.

By default, each vCPU added to a VM is assigned to one of the existing pCPUs. So if we configure 8 vCPUs for a VM, at least 8 pCPUs must exist in the host. In other words, if there are not enough pCPUs for the VM, it cannot be started.

By design, VMware ESXi can handle CPU oversubscription (more vCPUs requested than existing pCPUs), meaning the pCPU:vCPU ratio is no longer one-to-one (1:1). In a vSphere environment, the ESXi host executes the requests of every VM, so it must schedule processing time for each of them. But what ratio should be configured as the best setting? The answer depends on whether you prioritize capacity or performance, and it can vary greatly based on the virtualized application's requirements.

Every VM needs pCPU resources, so deploying many VMs, especially busy, resource-hungry ones, demands more CPU cycles. If you provision more VMs and also increase the pCPU:vCPU ratio (1:2, 1:4 or greater), the performance of the ESXi host will be affected.
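The ratio arithmetic above can be sketched in a few lines; this is only an illustration with made-up numbers, not a measurement from any real host:

```python
# Illustrative host: 2 sockets x 8 cores = 16 pCPUs (ignoring hyperthreading)
sockets, cores_per_socket = 2, 8
pcpus = sockets * cores_per_socket

# vCPU counts configured across the VMs placed on this host
vcpus = [4, 4, 2, 2, 8, 8, 4]

# Oversubscription ratio: total vCPUs per physical core
ratio = sum(vcpus) / pcpus
print(f"{sum(vcpus)} vCPUs on {pcpus} pCPUs -> {ratio:.1f}:1 oversubscription")
# -> 32 vCPUs on 16 pCPUs -> 2.0:1 oversubscription
```

Whether a 2:1 ratio like this is acceptable depends on the workloads; CPU Ready Time (discussed below) is the metric that tells you whether the scheduler is keeping up.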

As VMware notes, the vSphere ESXi scheduling mechanism prefers to reuse the same vCPU-to-pCPU mapping to boost performance through CPU caching on the socket. If there is no specific documentation for the application's CPU design, start it with a single vCPU and scale up as required; that way oversubscription will not have a serious negative impact.

We must also consider CPU Ready Time, which is as important a metric as CPU utilization. Generally, the vCPU:pCPU ratio depends on many factors, including the following:

  1. The version of the ESXi host; each newer version supports a higher ratio.
  2. The features and technologies supported by the physical processor.
  3. The workload rates of the critical applications implemented in the virtual environment.
  4. The capacity of processor resources on the other members of the cluster and their current performance, especially when a higher level of host fault tolerance is required; the resources available in the cluster determine the hosts on which each VM can be placed in the event of a host failure.


Should we use hyperthreading or not?

Hyperthreading is a technology that makes a single pCPU act as two logical processors. When the ESXi host is lightly loaded, each physical core can handle two independent threads at the same time. So if you have 16 physical cores in the ESXi host, after enabling HT (in both the BIOS configuration and the ESXi advanced settings) the host will show 32 logical processors. But using HT does not always increase performance; it is highly dependent on the application architecture, and in some cases you may encounter performance degradation with HT. Before enabling HT on your ESXi hosts, review the critical virtualized applications deployed on their VMs.

Source of original post in my personal blog: Undercity of Virtualization: Virtualization Tip1: Relation between physical CPU & virtual CPU
