
All, New lab manuals have been posted on the Documents page. These are the most up-to-date manuals available for all courses and will NOT match what is in Gilmore/Vitalsource. Please distribute these manuals to the students and advise them to use these instead of the ones in Vitalsource. If your ILT class is provisioned via VLP (EMEA), the students should use that manual instead, as it is up to date and slightly different from the ILT versions available in the Community or the outdated ILT ones from Vitalsource. I know this is a bit confusing and somewhat annoying, but given our current processes, it is impossible to ensure that Vitalsource has the most up-to-date book available. You should be checking the communities periodically for manual updates, as these are the best resources available for the students. Please share this information with your fellow VCIs, as it is clear from the document views that the message about these manuals is not reaching the entire community.
This is the third post in the three-part series "How to import the vCenter and SDDC Manager DBs into a local PC." The previous two articles are here:
How to import the vCenter and SDDC Manager DBs into a local PC [Taking the DB dumps]
How to import the vCenter and SDDC Manager DBs into a local PC [Installing PostgreSQL]
In this third part, we import the DB dumps taken in part one into the PostgreSQL DBMS built on the local PC in part two.

Preprocessing the DB dump files
The SDDC Manager DB dump (postgres-db-dump.gz) only needs to be decompressed into its text form (postgres-db-dump). It is self-contained, so nothing else is required.
The vCenter DB dump is the problem. vCenter's VCDB uses a feature called tablespaces to store parts of the database in separate locations (I admit I do not fully understand the details myself). In the first article we tarred and compressed /storage/seat/vpostgres/* as a whole; that part corresponds to the tablespaces. The tablespace locations must be written as full paths inside the DB dump, and the dump taken from the VCSA naturally still points to /storage/seat/vpostgres/. These paths need to be corrected before importing into the local DBMS.

1. Preparing the vCenter DB dump, step 1: decompress it
First of all, decompress the archive. Extract the tgz file you transferred to the local PC with any extraction tool. It should contain the following:
vcdb_dump
alarmtblsp directory
eventtblsp directory
hs1tblsp directory
hs2tblsp directory
hs3tblsp directory
hs4tblsp directory
tasktblsp directory

2. Open and edit the vcdb_dump file in a text editor
Next, open the vcdb_dump file in a text editor. The file is very large, so Notepad cannot handle it; use an editor that can open large text files, or edit the file with vi on the VCSA before transferring it. (I use Hidemaru.)
What you edit are the tablespace entries shown below. As you can see, the tablespace location paths still point to locations inside the VCSA. Rewrite them to locations (absolute paths) on your local PC. The following is an example after rewriting; replace the paths with the ones that match your own environment. Once the rewrite is done, save the file, and the DB dump side is finished.

Preparing the PostgreSQL side
Next, the preparation on the PostgreSQL side. If you connect to PostgreSQL with pgadmin, no special preparation is needed, but if you connect with psql, some steps are required beforehand.
When you start the PostgreSQL DBMS it is already connected with psql, but that CLI cannot be used: you cannot import a DB from it, and if you work in it after an import you quickly run into errors. I do not know why it errors out, but starting psql separately and connecting again avoids the problem.
Even with a fresh connection, however, a couple of issues arise, so some preparation is needed.
First, start PowerShell and maximize the screen buffer size; with the default buffer size the output gets truncated.
Next, from that PowerShell session you can connect to the DBMS by running psql.exe, located somewhere under the folder where you installed the PostgreSQL DBMS, but as-is you will hit a character-encoding problem. Without going into detail: Japanese Windows defaults to Shift JIS, while the DB dump contents are UTF-8, so display problems occur. Therefore both the PowerShell output encoding and psql's default client encoding need to be set to UTF-8.
In short, start PowerShell and run these two commands:
> chcp 65001
> $env:PGCLIENTENCODING = "UTF8"
The first command sets the PowerShell code page to UTF-8. The second sets psql's client encoding to UTF-8.
After running them, connect to the PostgreSQL DBMS once with:
> psql.exe -U postgres
Once connected, run the following command and confirm that the encoding is UTF8:
postgres=# SHOW client_encoding;
An example of the whole sequence is shown below. (The chcp 65001 command line itself is not visible because PowerShell refreshes the screen right after it runs.)
Active code page: 65001
PS C:\Users\administrator\Downloads\vcdb_dump> $env:PGCLIENTENCODING = "UTF8"
PS C:\Users\administrator\Downloads\vcdb_dump>
PS C:\Users\administrator\Downloads\vcdb_dump>
PS C:\Users\administrator\Downloads\vcdb_dump>
PS C:\Users\administrator\Downloads\vcdb_dump> C:\Users\administrator\Downloads\PostgreSQLPortable\App\PgSQL\bin\psql.exe -U postgres
psql (10.4)
WARNING: Console code page (65001) differs from Windows code page (932)
         8-bit characters might not work correctly. See psql reference
         page "Notes for Windows users" for details.
Type "help" for help.
postgres=# SHOW client_encoding;
 client_encoding
-----------------
 UTF8
(1 row)

postgres=#

Importing the DB dump
Once this is confirmed, exit psql for now (the exit command is \q). Then import the DB into the PostgreSQL DBMS with:
> psql.exe -U postgres -f .\vcdb_dump
The -f option specifies the db dump file. This assumes you import only one db dump; if you want to import a different one, reset the DB with the procedure described later, re-initialize it, and then import again.
Make sure no errors appear during the import. If a tablespace path is wrong, an error is shown. An error saying "role "postgres" already exists" is harmless; investigate the content of any other errors.
When it completes without problems, connect again with > psql.exe -U postgres. If you list the databases with \l (a backslash, shown as a ¥ mark on Japanese Windows, and a lowercase L), the imported database should appear. You should be able to confirm the same thing from the pgadmin GUI (refresh it if it does not update automatically).

Re-initializing the DBMS (portable version only)
The DB dump import is complete at this point, but if you try to import the DB of another environment, the import will most likely fail because of name collisions. You could delete the objects individually, but cleaning up all the leftovers by hand is tedious, so re-initializing is the better approach.
With the portable version, re-initializing is very easy. In the folder where you installed portable PostgreSQL there is a folder called Data, and under it another folder called data. Delete that data folder entirely and start the DBMS again.
First shut down the running PostgreSQL DBMS: exit psql with \q in the command prompt that opened when you started it. Then delete the data folder mentioned above. When you start PostgreSQL again, the DB is initialized back to its initial state (everything written so far is gone). The DBMS is now back to a clean state and you can import a new DB dump.
That concludes the DB dump import covered across these three posts. Actually browsing the data requires knowledge of psql and SQL, which this article does not cover; there are plenty of easy-to-follow articles on the Internet, so please have a look around.
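(Not part of the original series, just a hedged starting point for that browsing: assuming the imported vCenter database is named VCDB, which is the usual name, psql meta-commands give you an overview before you write any SQL. Table names such as vpx_vm can differ between vCenter versions, so check the output of \dt first.)
> psql.exe -U postgres
postgres=# \l
postgres=# \c VCDB
VCDB=# \dt
VCDB=# SELECT count(*) FROM vpx_vm;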
Last time, we created a Tanzu Kubernetes Cluster in the vSphere with Kubernetes lab environment. The previous post is here:
Building a vSphere with Kubernetes lab, Part-13: Creating a Tanzu Kubernetes Cluster in a Supervisor Namespace
This time, we connect to the Tanzu Kubernetes Cluster we created and start a Pod on it. The Tanzu Kubernetes Cluster has already been created. (Note: it was built with the procedure from the previous post, but it has since been re-created, so the node names differ.)

Connecting to the Tanzu Kubernetes Cluster
Connect to the Tanzu Kubernetes Cluster with kubectl. The vSphere-specific kubectl is prepared as described in this post:
Building a vSphere with Kubernetes lab, Part-11: Starting a vSphere Pod with kubectl
For login, the following options are added to the command line used to connect to the Supervisor Cluster:
--tanzu-kubernetes-cluster-namespace=lab-ns-02
--tanzu-kubernetes-cluster-name=tkg-cluster-01
A Tanzu Kubernetes Cluster name is unique only within its namespace, so you must specify not only "--tanzu-kubernetes-cluster-name" but also the namespace name with "--tanzu-kubernetes-cluster-namespace".

$ kubectl vsphere login --insecure-skip-tls-verify --server=192.168.70.33 --tanzu-kubernetes-cluster-namespace=lab-ns-02 --tanzu-kubernetes-cluster-name=tkg-cluster-01
Username: administrator@vsphere.local
Password: ★enter the password
Logged in successfully.

You have access to the following contexts:
   192.168.70.33
   lab-ns-01
   lab-ns-02
   tkg-cluster-01

If the context you wish to use is not in this list, you may need to try logging in again later, or contact your cluster administrator.

To change context, use `kubectl config use-context <workload name>`
$

Checking the nodes that make up the Kubernetes cluster
Specify the context explicitly and check the nodes with get nodes. As shown below, the Tanzu Kubernetes Cluster is a Kubernetes cluster built from the deployed VMs, separate from the Supervisor Cluster.

$ kubectl --context tkg-cluster-01 get nodes
NAME                                            STATUS   ROLES    AGE   VERSION
tkg-cluster-01-control-plane-5w6vn              Ready    master   13h   v1.16.8+vmware.1
tkg-cluster-01-control-plane-p89lb              Ready    master   12h   v1.16.8+vmware.1
tkg-cluster-01-control-plane-phd6l              Ready    master   12h   v1.16.8+vmware.1
tkg-cluster-01-workers-l6qtc-586578bd88-5d7kh   Ready    <none>   12h   v1.16.8+vmware.1
tkg-cluster-01-workers-l6qtc-586578bd88-czrpg   Ready    <none>   12h   v1.16.8+vmware.1
tkg-cluster-01-workers-l6qtc-586578bd88-vk6f8   Ready    <none>   12h   v1.16.8+vmware.1

For comparison, check the Kubernetes nodes again using the context outside the Tanzu Kubernetes Cluster (tkg-cluster-01), i.e. the context with the same name as the lab-ns-02 namespace. That cluster is made up of the ESXi hosts and the Supervisor Control Plane VMs.

$ kubectl --context lab-ns-02 get nodes
NAME                               STATUS   ROLES    AGE   VERSION
422c6912a62eabbf0a45c417405308c9   Ready    master   67d   v1.17.4-2+a00aae1e6a4a69
422c9d9222c60e5328cdc12a543c099a   Ready    master   67d   v1.17.4-2+a00aae1e6a4a69
422cfbc654627c47880a2ec7ae144424   Ready    master   67d   v1.17.4-2+a00aae1e6a4a69
lab-wcp-esxi-031.go-lab.jp         Ready    agent    67d   v1.17.4-sph-091e39b
lab-wcp-esxi-032.go-lab.jp         Ready    agent    67d   v1.17.4-sph-091e39b
lab-wcp-esxi-033.go-lab.jp         Ready    agent    67d   v1.17.4-sph-091e39b

Starting a Pod on the Tanzu Kubernetes Cluster
Using the YAML file used earlier to start a vSphere Pod, start a Pod on tkg-cluster-01. The YAML file looks like this:

$ cat nginx-pod.yml
---
kind: Pod
apiVersion: v1
metadata:
  name: nginx-pod
  labels:
    app: wcp-demo
spec:
  containers:
  - image: nginx
    name: nginx-container

Switch the kubectl context to tkg-cluster-01:

$ kubectl config use-context tkg-cluster-01
Switched to context "tkg-cluster-01".
Now start the Pod.

$ kubectl apply -f nginx-pod.yml
pod/nginx-pod created

It was started on one of the tkg-cluster-01 worker nodes (tkg-cluster-01-workers-~).

$ kubectl get pods -o wide
NAME        READY   STATUS    RESTARTS   AGE   IP            NODE                                            NOMINATED NODE   READINESS GATES
nginx-pod   1/1     Running   0          40s   192.0.175.2   tkg-cluster-01-workers-l6qtc-586578bd88-vk6f8   <none>           <none>

In the vSphere Client, on the other hand, you cannot see this Pod the way you could with vSphere Pods. With vSphere Pods, the started Pod appeared in the inventory and around the area marked in red below. Pods in a Tanzu Kubernetes Cluster (and its other Kubernetes resources), however, do not show up in this "core Kubernetes" view, because the Pod runs inside a VM (in this environment, "tkg-cluster-01-workers-~"). A vSphere Pod is created as a special kind of VM, which gave it good visibility in the vSphere Client, but in a Tanzu Kubernetes Cluster the Pod runs inside a container-host VM, just like the familiar "VM with Docker installed", so there is no particular vSphere-specific visibility. For checking resources, a Kubernetes visualization tool such as Octant (https://github.com/vmware-tanzu/octant) comes in handy.
There is more to come.
Building a vSphere with Kubernetes lab, Part-15: Using PSP and creating a Deployment on a Tanzu Kubernetes Cluster
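(As an optional check that is not part of the original post - a minimal sketch assuming the nginx-pod from the YAML above is Running and that the workstation running kubectl can reach the cluster - you can confirm nginx is actually serving traffic without any vSphere-side visibility:)
$ kubectl --context tkg-cluster-01 port-forward pod/nginx-pod 8080:80
$ curl http://localhost:8080
(Run curl in a second terminal while the port-forward is active; the nginx welcome page should be returned.)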
This is the second post in the series on importing a DB dump of the internal management PostgreSQL DBMS of vCenter and SDDC Manager into a local PostgreSQL DBMS.
This article continues from:
How to import the vCenter and SDDC Manager DBs into a local PC [Taking the DB dumps]
The next article is:
How to import the vCenter and SDDC Manager DBs into a local PC [Importing and browsing the DB]
The first article explained how to take the dump of the DB to be imported. This article explains how to install PostgreSQL on the local (Windows) PC.

How to set up PostgreSQL on a Windows PC
The installation itself is not difficult: a web search turns up plenty of how-to articles, and the installer is easy to obtain. However, the goal here differs from typical DB usage: we only want to browse a dumped, point-in-time copy of a DB. For that purpose, an environment that is easy to delete, reset, and rebuild is preferable to a normal installation.
With PostgreSQL, as with any application, a normal install touches the registry and environment variables, and uninstalling does not always remove every trace; in some cases installing anything at all may itself be a concern. So this time I chose the portable edition of PostgreSQL. The portable edition does not depend on the registry or environment variables: you just choose an install location and run the executable. You can even install it on a USB stick and carry the DB around, and if you want to delete or reset it, deleting the install folder is all it takes.
Of course, installing with the normal installer is fine too. This article does not cover the normal installer; if you go that route, a web search will turn up plenty of articles. In that case the rest of this post is not needed, so skip ahead to part three:
How to import the vCenter and SDDC Manager DBs into a local PC [Importing and browsing the DB]

Downloading the portable edition of PostgreSQL
I deliberately do not give a download URL here, because no portable edition is published at https://www.postgresql.org/download/. A search will easily turn up download links for portable PostgreSQL, but please judge their safety at your own risk. As for the PostgreSQL version, pick the same version as the target environment, or the closest one available.

Installing the portable edition
The downloads may include Zip and tar.gz archives, but if a paf.exe package is available it is the easiest option: run the executable, a wizard appears, choose a destination folder, and it extracts and is immediately ready to use.

Starting the portable edition
Start the executable in the folder where you installed the portable edition. On startup the DB is initialized automatically and a message such as
Initialising database for first use, please wait...
is displayed. It becomes usable within a few minutes. When a message like the following appears and you get the postgres=# prompt, initialization is complete.

'postgres' connected on port 5432. Type \q to quit and shutdown the server.
psql (10.4)
WARNING: Console code page (1252) differs from Windows code page (932)
         8-bit characters might not work correctly. See psql reference
         page "Notes for Windows users" for details.
Type "help" for help.
postgres=#

Try typing \l to see the list of databases right after initialization. (\l is a ¥ mark (backslash) followed by a lowercase L.)

postgres=# \l
                             List of databases
   Name    |  Owner   | Encoding | Collate | Ctype |   Access privileges
-----------+----------+----------+---------+-------+-----------------------
 postgres  | postgres | UTF8     | C       | C     |
 template0 | postgres | UTF8     | C       | C     | =c/postgres          +
           |          |          |         |       | postgres=CTc/postgres
 template1 | postgres | UTF8     | C       | C     | =c/postgres          +
           |          |          |         |       | postgres=CTc/postgres
(3 rows)

postgres=#

It should look something like this.

Installing a DB administration tool
A normal PostgreSQL installation also installs pgadmin, a management GUI, but the portable edition only includes the psql command line, so I recommend installing pgadmin separately. pgadmin lets you manage and browse DBs from a GUI, which is convenient not only if you are not comfortable with SQL, but also for getting an overview of a DB's structure or tweaking a few values inside it. pgadmin can be downloaded from https://www.pgadmin.org/ - choose the Windows installer on that site's download page and install it. There is nothing tricky about it; the process is completely straightforward.

Starting pgadmin and connecting to the DB
When you start the installed pgadmin, a browser opens and automatically connects to the local pgadmin process. If you see a screen like the one above, it worked.
Next, right-click Servers in the left pane and choose Create > Server...
A settings wizard appears; enter any name in the Name field on the General tab. Then, on the Connection tab, enter localhost in Host, leave everything else at the defaults, and click Save. When the save completes, an entry appears in the left pane; expanding it lets you see the structure and contents of the DB.
Let's try adding a Database. Right-click Databases and choose Create > Database.... Enter any name in Database on the General tab and click Save. When the save completes, the new Database appears.
Let's also check the created Database with the psql command line. Compared with the earlier output, you can see that testdb has been added to the list.
pgadmin supports many other operations on the DB; for other operations and ways to browse, the following external article (in Japanese) explains things very clearly, so please refer to it: https://itsakura.com/pgadmin4-db-create

How to stop pgadmin
pgadmin occasionally starts misbehaving; in that case, restart it. There is a pgadmin elephant icon in the task tray: right-click it, choose Shutdown Server to stop it, and then start it again.
This post covered installing the PostgreSQL DBMS and the pgadmin GUI management tool. The next post walks through importing the dumped DB and examining its contents.
How to import the vCenter and SDDC Manager DBs into a local PC [Importing and browsing the DB]
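(For reference, and not part of the original post: the same Database creation and check can also be done purely from psql. The name testdb matches the pgadmin example above.)
postgres=# CREATE DATABASE testdb;
CREATE DATABASE
postgres=# \l
postgres=# DROP DATABASE testdb;
DROP DATABASE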
When using the VMXNET3 driver on ESXi 4.x, 5.x, or 6.x, you see significant packet loss during periods of very high traffic bursts.

Cause
This issue occurs when packets are dropped during high traffic bursts. This can happen due to a lack of receive and transmit buffer space, or when receive traffic is speed-constrained, for example by a traffic filter.

To resolve this issue, first ensure that no traffic filtering is occurring (for example, with a mail filter). After eliminating this possibility, slowly increase the number of buffers in the guest operating system.

To reduce burst traffic drops in Windows, adjust the buffer settings:
1. Click Start > Control Panel > Device Manager.
2. Right-click vmxnet3 and click Properties.
3. Click the Advanced tab.
4. Click Small Rx Buffers and increase the value. The maximum value is 8192.
5. Click Rx Ring #1 Size and increase the value. The maximum value is 4096.

Note: These changes take effect on the fly, so no reboot is required. However, any application sensitive to TCP session disruption may fail and have to be restarted. This applies to RDP, so it is better to do this work in a console window.

This issue is seen in the Windows guest operating system with a VMXNET3 vNIC. It can occur with versions besides 2008 R2. It is important to increase the values of Small Rx Buffers and Rx Ring #1 gradually to avoid drastically increasing the memory overhead on the host and possibly causing performance issues if resources are close to capacity. If this issue occurs on only 2-3 virtual machines, set Small Rx Buffers and Rx Ring #1 to their maximum values and monitor virtual machine performance to see if this resolves the issue. The Small Rx Buffers and Rx Ring #1 variables affect only non-jumbo frame traffic on the adapter.
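As a convenience, the same buffer changes can usually be scripted with the NetAdapter cmdlets available in Windows Server 2012 and later. This is only a sketch: the adapter name "Ethernet0" is an assumption, and the exact display names exposed by the vmxnet3 driver can vary by driver version, so list them first and adjust accordingly.
# List the advanced properties the vmxnet3 driver exposes for this adapter
Get-NetAdapterAdvancedProperty -Name "Ethernet0" | Select-Object DisplayName, DisplayValue
# Raise the receive buffer settings (maximum values shown in the steps above)
Set-NetAdapterAdvancedProperty -Name "Ethernet0" -DisplayName "Small Rx Buffers" -DisplayValue 8192
Set-NetAdapterAdvancedProperty -Name "Ethernet0" -DisplayName "Rx Ring #1 Size" -DisplayValue 4096
As with the Device Manager method, expect a brief disruption of existing TCP sessions when the values are applied.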
Dear readers

Welcome to this new blog post talking about static routing with the NSX-T Tier-0 Gateway. The majority of our customers use BGP between the Tier-0 Gateway and the Top of Rack (ToR) switches to exchange IP prefixes. For those customers who prefer static routing, this blog post walks through the two design options:

Design Option 1: Static Routing using SVI as Next Hop with NSX-T Edge Nodes in Active/Active Mode to support ECMP for North/South
Design Option 2: Static Routing using SVI as Next Hop with NSX-T Edge Nodes in Active/Standby Mode using HA VIP

I have the impression that the second design option, a Tier-0 Gateway with two NSX-T Edge Nodes in Active/Standby mode using HA VIP, is widely known, but that the first option, NSX-T Edge Nodes in Active/Active mode leveraging ECMP with static routing, is relatively unknown. This first option is, for example, also a valid Enterprise PKS (new name: Tanzu Kubernetes Grid Integrated - TKGI) design option (with a shared Tier-1 Gateway), and it can be used with vSphere 7 with Kubernetes (Project Pacific) as well, where BGP is not allowed or not preferred. I am sure the reader is aware that a Tier-0 Gateway in Active/Active mode cannot be enabled for stateful services (e.g. Edge firewall).

Before we start to configure these two design options, we need to describe the overall lab topology, the physical and logical setup, and the NSX-T Edge Node setup, including the main NSX-T Edge Node installation steps. For both options we will configure only a single N-VDS on the NSX-T Edge Node. This is not a requirement, but it is a pretty simple design option. The other popular design options typically consist of three embedded N-VDS on the NSX-T Edge Node for design option 1 and two embedded N-VDS on the NSX-T Edge Node for design option 2.

Logical Lab Topology
The lab setup is pretty simple. For an easy comparison between the two options, I have configured both design options in parallel. The most relevant part for this blog post is between the two Tier-0 Gateways and the two ToR switches acting as Layer 3 Leaf switches. The configuration and design of the Tier-1 Gateway and of the compute vSphere cluster hosting the eight workload Ubuntu VMs are identical for both design options. There is only a single Tier-1 Gateway configured per Tier-0 Gateway, each with two overlay segments. The eight workload Ubuntu VMs are installed on a separate compute vSphere cluster called NY-CLUSTER-COMPUTE1 with only two ESXi hosts and are evenly distributed across the two hosts. Those two compute ESXi hosts are prepared with NSX-T and have only a single overlay Transport Zone configured. The four NSX-T Edge Node VMs are running on another vSphere cluster, called NY-CLUSTER-EDGE1. This vSphere cluster again has only two ESXi hosts. A third vSphere cluster called NY-CLUSTER-MGMT is used for the management components, like vCenter and the NSX-T managers. Details about the compute and management vSphere clusters are not relevant for this blog post and hence are deliberately omitted. The diagram below shows the NSX-T logical topology, the most relevant vSphere objects and, underneath, the NSX-T overlay and VLAN segments (for the NSX-T Edge Node North/South connectivity).

Physical Setup
Let's first have a look at the physical setup used for our four NSX-T VM-based Edge Nodes. Understanding the physical setup is no less important than the logical one. Two Nexus 3048 ToR switches configured as Layer 3 Leaf switches are used.
They have a Layer 3 connection towards a single spine (not shown) and two Layer 2 trunks combined into a single port channel with LACP between the two ToR switches. Two ESXi hosts (ny-esx50a and ny-esx51a) with 4 pNICs in total are assigned to two different virtual Distributed Switches (vDS). Please note, the Nexus 3048 switches are not configured with Cisco vPC, even though this would also be a valid option. The relevant physical links for the NSX-T Edge Node connectivity are only the four green links connected to vDS2. Those two ESXi hosts (ny-esx50a and ny-esx51a) are NOT prepared with NSX-T. The two ESXi hosts belong to a single vSphere cluster used exclusively for NSX-T Edge Node VMs. There are a few good reasons NOT to prepare ESXi hosts with NSX-T when they host only NSX-T Edge Node VMs:
- It is not required.
- Better NSX-T upgrade-ability: you don't need to evacuate the NSX-T VM-based Edge Nodes with vMotion during the host NSX-T software upgrade to enter maintenance mode, and every vMotion of an NSX-T VM-based Edge Node causes a short, unnecessary data plane glitch.
- Shorter NSX-T upgrade cycles: for every NSX-T upgrade you only need to upgrade the ESXi hosts used for the payload VMs and the NSX-T VM-based Edge Nodes themselves, but not the ESXi hosts where your Edge Nodes are deployed.
- vSphere HA can be turned off: do we want to move a highly loaded packet forwarding node like an NSX-T Edge Node with vMotion in a host vSphere HA event? No, I don't think so, as the routing HA model reacts faster in a failure event.
- Simplified DRS settings: do we want to move an NSX-T VM-based Edge Node with vMotion just to balance resources?
- Typically a resource pool is not required.
We should never underestimate how important smooth upgrade cycles are. Upgrade cycles are time-consuming events and are typically required multiple times per year. Leaving the ESXi hosts unprepared for NSX-T is considered best practice and should always be done in any NSX-T deployment that can afford a dedicated vSphere cluster only for NSX-T VM-based Edge Nodes. Installing NSX-T on the ESXi hosts where you have deployed your NSX-T VM-based Edge Nodes (the so-called collapsed design) is valid too and is appropriate for customers who have a low number of ESXi hosts and want to keep CAPEX costs low.

ESXi Host vSphere Networking
The first virtual Distributed Switch (vDS1) is used for the host vmkernel networking only. The typical vmkernel interfaces are attached to three different port groups. The second virtual Distributed Switch (vDS2) is used for the NSX-T VM-based Edge Node networking only. All virtual Distributed Switch port groups are tagged with the appropriate VLAN id, with the exception of the three uplink trunk port groups (more details later). Both virtual Distributed Switches are configured with an MTU of 9000 bytes, and I am using different Geneve Tunnel End Point (TEP) VLANs for the compute ESXi hosts (VLAN 150 for ny-esx70a and ny-esx71a) and for the NSX-T VM-based Edge Nodes (VLAN 151) running on the ESXi hosts ny-esx50a and ny-esx51a. In such a setup this is not a requirement, but it helps to distribute the BUM traffic replication effort when leveraging the hierarchical 2-Tier replication mode. The "dummy" port group is used to connect the unused NSX-T Edge Node fast path interfaces (fp-ethX); the attachment to a dummy port group is done to prevent NSX-T from reporting the interface admin status as down.
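For reference, a quick way to verify or set the 9000-byte MTU mentioned above is PowerCLI (a hedged sketch, not part of the original post; it assumes a PowerCLI session already connected to the vCenter that manages the NY-vDS-ESX5x-EDGE2 switch):
# Check the current MTU of the Edge vDS
Get-VDSwitch -Name "NY-vDS-ESX5x-EDGE2" | Select-Object Name, Mtu
# Set the MTU to 9000 bytes if needed
Get-VDSwitch -Name "NY-vDS-ESX5x-EDGE2" | Set-VDSwitch -Mtu 9000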
Table 1 - vDS Setup Overview

Name in Diagram: vDS1
vDS Name: NY-vDS-ESX5x-EDGE1
Physical Interfaces: vmnic0 and vmnic1
Port Groups:
- NY-vDS-PG-ESX5x-EDGE1-VMK0-Mgmt50
- NY-vDS-PG-ESX5x-EDGE1-VMK1-vMotion51
- NY-vDS-PG-ESX5x-EDGE1-VMK2-ipStorage52

Name in Diagram: vDS2
vDS Name: NY-vDS-ESX5x-EDGE2
Physical Interfaces: vmnic2 and vmnic3
Port Groups:
- NY-vDS-PG-ESX5x-EDGE2-EDGE-Mgmt60 (Uplink 1 active, Uplink 2 standby)
- NY-vDS-PG-ESX5x-EDGE2-EDGE-TrunkA (Uplink 1 active, Uplink 2 unused)
- NY-vDS-PG-ESX5x-EDGE2-EDGE-TrunkB (Uplink 1 unused, Uplink 2 active)
- Ny-vDS-PG-ESX5x-EDGE2-EDGE-TrunkC (Uplink 1 active, Uplink 2 active)
- NY-vDS-PG-ESX5x-EDGE2-Dummy999 (Uplink 1 and Uplink 2 are unused)

The combined diagram below shows the most relevant NY-vDS-ESX5x-EDGE2 port group settings regarding VLAN trunking and Teaming and Failover.

Logical VLAN Setup
The ToR switches are configured with the four relevant VLANs (60, 151, 160 and 161) for the NSX-T Edge Nodes and the associated Switched Virtual Interfaces (SVI). VLANs 151, 160 and 161 (VLAN 161 is not used in design option 2) are carried over the three vDS trunk port groups (NY-vDS-PG-ESX5x-EDGE2-EDGE-TrunkA, NY-vDS-PG-ESX5x-EDGE2-EDGE-TrunkB and NY-vDS-PG-ESX5x-EDGE2-EDGE-TrunkC). The SVIs on the Nexus 3048 for Edge Management (VLAN 60) and for the Edge Node TEP (VLAN 151) are configured with HSRPv2 with a VIP of .254. The two SVIs on the Nexus 3048 for the uplink VLANs (160 and 161) are configured without HSRP. VLAN 999, the dummy VLAN, does not exist on the ToR switches. The Tier-1 Gateway is not shown in the diagrams below. Please note that the dotted lines to SVI161 and SVI160 indicate that the VLAN/SVI configuration exists on the respective ToR switch but is not used for the static routing in design option 1 (Active/Active ECMP with static routing), and that the dotted line to SVI161 in design option 2 indicates that the VLAN/SVI configuration exists on the ToR switches but is not used for the static routing with Active/Standby and HA VIP. More details about the static routing are shown in a later step.

NSX-T Edge Node Deployment
The NSX-T Edge Node deployment option with the single Edge Node N-VDS is simple and has been discussed in one of my other blog posts. In this lab exercise I have done an NSX-T Edge Node ova installation, followed by the "join" command and the final step of the NSX-T Edge Transport Node configuration. The NSX-T UI installation option is valid as well, but my personal preference is the ova deployment option. The most relevant steps for such an NSX-T Edge Node setup are placing the dot1q tagging correctly and mapping the NSX-T Edge Node interfaces correctly to the virtual Distributed Switch (vDS2) trunk port groups (A & B for option 1 and C for option 2), as shown in the diagrams below. The first diagram below shows the NSX-T Edge Node overall setup and the network selection for NSX-T Edge Nodes 20 & 21 during the ova deployment for design option 1. The next diagram shows the NSX-T Edge Node overall setup and the network selection for NSX-T Edge Nodes 22 & 23 during the ova deployment for design option 2. After the successful ova deployment, the "join" command must be used to connect the management plane of the NSX-T Edge Nodes to the NSX-T managers. The "join" command requires the NSX-T manager thumbprint. Jump with SSH to the first NSX-T manager and read the API thumbprint, then jump via SSH to every ova-deployed NSX-T Edge Node and execute the "join" command.
The two steps are shown in the table below:

Table 2 - NSX-T Edge Node "join" to the NSX-T Managers

Step 1 - Read the API thumbprint (on the first NSX-T manager):
ny-nsxt-manager-21> get certificate api thumbprint
ea90e8cc7adb6d66994a9ecc0a930ad4bfd1d09f668a3857e252ee8f74ba1eb4

Step 2 - Join the management plane, on every NSX-T Edge Node previously deployed through ova (NSX-T will sync the configuration with the two other NSX-T managers; do not join using the NSX-T manager VIP FQDN/IP):
ny-nsxt-edge-node-20> join management-plane ny-nsxt-manager-21.corp.local thumbprint ea90e8cc7adb6d66994a9ecc0a930ad4bfd1d09f668a3857e252ee8f74ba1eb4 username admin
Password for API user:
Node successfully registered as Fabric Node: 437e2972-bc40-11ea-b89c-005056970bf2
ny-nsxt-edge-node-20>
--- do the same for all other NSX-T Edge Nodes ---

The resulting UI after the "join" command is shown below. The configuration state must be "Configure NSX".

NSX-T Edge Transport Node Configuration
Before we can start with the NSX-T Edge Transport Node configuration, we need to be sure that the Uplink Profiles are ready. The two design options require two different Uplink Profiles. The two diagrams below show the two different Uplink Profiles for the NSX-T Edge Transport Nodes.

The Uplink Profile "NY-EDGE-UPLINK-PROFILE-SRC-ID-TEP-VLAN151" is used for design option 1 and is required for Multi-TEP with the teaming policy "LOADBALANCE_SRCID" and two Active Uplinks (EDGE-UPLINK01 and EDGE-UPLINK02). Two additional named teaming policies are configured for proper ECMP dataplane forwarding; please see the blog post "Single NSX-T Edge Node N-VDS with correct VLAN pinning" for more details. I am using the same named teaming configuration for design option 1 as in that other blog post, where I used BGP instead of static routing. As mentioned already, the dot1q tagging (Transport VLAN = 151) for the two TEP interfaces is required as part of this Uplink Profile configuration.

The Uplink Profile "NY-EDGE-UPLINK-PROFILE-FAILOVER-TEP-VLAN151" is used for design option 2 and requires the teaming policy "FAILOVER_ORDER" with only a single Active Uplink (EDGE-UPLINK01). Named teaming policies are not required. Again, the dot1q tagging for the single TEP interface (Transport VLAN = 151) is required as part of this Uplink Profile configuration.

The NSX-T Edge Transport Node configuration itself is straightforward and is shown in the two diagrams below for a single NSX-T Edge Transport Node per design option. NSX-T Edge Transport Nodes 20 & 21 (design option 1) use the previously configured Uplink Profile "NY-EDGE-UPLINK-PROFILE-SRC-ID-TEP-VLAN151". Two static TEP IP addresses are configured and the two Uplinks (EDGE-UPLINK01 & EDGE-UPLINK02) are mapped to the fast path interfaces (fp-eth0 & fp-eth1). NSX-T Edge Transport Nodes 22 & 23 (design option 2) use the previously configured Uplink Profile "NY-EDGE-UPLINK-PROFILE-FAILOVER-TEP-VLAN151". A single static TEP IP address is configured and the single Uplink (EDGE-UPLINK01) is mapped to the fast path interface (fp-eth0). Please note, the required configuration of the two NSX-T Transport Zones and the single N-VDS switch is not shown. The NSX-T Edge Transport Nodes ny-nsxt-edge-node-20 and ny-nsxt-edge-node-21 are assigned to the NSX-T Edge cluster NY-NSXT-EDGE-CLUSTER01, and the NSX-T Edge Transport Nodes ny-nsxt-edge-node-22 and ny-nsxt-edge-node-23 are assigned to the NSX-T Edge cluster NY-NSXT-EDGE-CLUSTER02.
This NSX-T Edge cluster configuration is also not shown.

NSX-T Tier-0 Gateway Configuration
The base NSX-T Tier-0 Gateway configuration is straightforward and is shown in the two diagrams below. The Tier-0 Gateway NY-T0-GATEWAY-01 (design option 1) is configured in Active/Active mode and is associated with the NSX-T Edge Cluster NY-NSXT-EDGE-CLUSTER01. The Tier-0 Gateway NY-T0-GATEWAY-02 (design option 2) is configured in Active/Standby mode and is associated with the NSX-T Edge Cluster NY-NSXT-EDGE-CLUSTER02. In this example preemptive mode is selected and the first NSX-T Edge Transport Node (ny-nsxt-edge-node-22) is the preferred Edge Transport Node (the active node when both nodes are up and running).

The next step of the Tier-0 Gateway configuration covers the Layer 3 interfaces (LIF) for the northbound connectivity towards the ToR switches. The next two diagrams show the IP topologies, including the ToR switch IP configuration, along with the resulting NSX-T Tier-0 Gateway Layer 3 interface configuration for design option 1 (A/A ECMP). The diagrams after that show the IP topology, including the ToR switch IP configuration, along with the resulting NSX-T Tier-0 Gateway interface configuration for design option 2 (A/S HA VIP). The HA VIP configuration requires that both NSX-T Edge Transport Node interfaces belong to the same Layer 2 segment. Here I am using the previously configured Layer 3 interfaces (LIF); both belong to the same VLAN segment 160 (NY-T0-VLAN-SEGMENT-160).

All the previous steps are probably known to the majority of readers. The next step, however, is the static routing configuration; it highlights the relevant configuration to achieve ECMP with two NSX-T Edge Transport Nodes in Active/Active mode.

Design Option 1 - Static Routing (A/A ECMP)
The first step in design option 1 is the Tier-0 static route configuration for northbound traffic. The most common way is to configure default routes northbound. Two default routes, each with a different Next Hop (172.16.160.254 and 172.16.161.254), are configured on NY-T0-GATEWAY-01. This is the first step to achieve ECMP for northbound traffic towards the ToR switches. The diagram below shows the corresponding NSX-T Tier-0 Gateway static routing configuration. Please keep in mind that at the NSX-T Edge Transport Node level, each Edge Transport Node will have two default route entries. This is shown in the table below. The difference between the logical construct configuration (the Tier-0 Gateway) and the "physical" construct configuration (the Edge Transport Nodes) might already be known, as we have the same behavior with BGP. This approach limits configuration errors: with BGP we typically configure only two BGP peers towards the two ToR switches, but each NSX-T Edge Transport Node gets two BGP sessions realized. The diagram below shows the setup with the two default routes (in black) northbound. Please note, the configuration steps for the Tier-1 Gateway (NY-T1-GATEWAY-GREEN) and for connecting it to the Tier-0 Gateway are not shown.
Table 3 - NSX-T Edge Transport Node Routing Table for Design Option 1 (A/A ECMP)

ny-nsxt-edge-node-20 (Service Router):
ny-nsxt-edge-node-20(tier0_sr)> get route 0.0.0.0/0
Flags: t0c - Tier0-Connected, t0s - Tier0-Static, b - BGP, t0n - Tier0-NAT, t1s - Tier1-Static, t1c - Tier1-Connected, t1n: Tier1-NAT, t1l: Tier1-LB VIP, t1ls: Tier1-LB SNAT, t1d: Tier1-DNS FORWARDER, t1ipsec: Tier1-IPSec, isr: Inter-SR, > - selected route, * - FIB route
Total number of routes: 1
t0s> * 0.0.0.0/0 [1/0] via 172.16.160.254, uplink-307, 03:29:43
t0s> * 0.0.0.0/0 [1/0] via 172.16.161.254, uplink-309, 03:29:43
ny-nsxt-edge-node-20(tier0_sr)>

ny-nsxt-edge-node-21 (Service Router):
ny-nsxt-edge-node-21(tier0_sr)> get route 0.0.0.0/0
Flags: t0c - Tier0-Connected, t0s - Tier0-Static, b - BGP, t0n - Tier0-NAT, t1s - Tier1-Static, t1c - Tier1-Connected, t1n: Tier1-NAT, t1l: Tier1-LB VIP, t1ls: Tier1-LB SNAT, t1d: Tier1-DNS FORWARDER, t1ipsec: Tier1-IPSec, isr: Inter-SR, > - selected route, * - FIB route
Total number of routes: 1
t0s> * 0.0.0.0/0 [1/0] via 172.16.160.254, uplink-292, 03:30:42
t0s> * 0.0.0.0/0 [1/0] via 172.16.161.254, uplink-306, 03:30:42
ny-nsxt-edge-node-21(tier0_sr)>

The second step is to configure static routing southbound from the ToR switches towards the NSX-T Edge Transport Nodes. This step is required to achieve ECMP for southbound traffic. Each ToR switch is configured with four static routes in total to forward traffic to the destination overlay networks within NSX-T. We can easily see that each NSX-T Edge Transport Node is used twice as Next Hop for the static route entries.

Table 4 - Nexus ToR Switches Static Routing Configuration and Resulting Routing Table for Design Option 1 (A/A ECMP)

NY-N3K-LEAF-10:
ip route 172.16.240.0/24 Vlan160 172.16.160.20
ip route 172.16.240.0/24 Vlan160 172.16.160.21
ip route 172.16.241.0/24 Vlan160 172.16.160.20
ip route 172.16.241.0/24 Vlan160 172.16.160.21

NY-N3K-LEAF-11:
ip route 172.16.240.0/24 Vlan161 172.16.161.20
ip route 172.16.240.0/24 Vlan161 172.16.161.21
ip route 172.16.241.0/24 Vlan161 172.16.161.20
ip route 172.16.241.0/24 Vlan161 172.16.161.21

NY-N3K-LEAF-10# show ip route static
IP Route Table for VRF "default"
'*' denotes best ucast next-hop
'**' denotes best mcast next-hop
'[x/y]' denotes [preference/metric]
'%<string>' in via output denotes VRF <string>

172.16.240.0/24, ubest/mbest: 2/0
    *via 172.16.160.20, Vlan160, [1/0], 03:26:44, static
    *via 172.16.160.21, Vlan160, [1/0], 03:26:58, static
172.16.241.0/24, ubest/mbest: 2/0
    *via 172.16.160.20, Vlan160, [1/0], 03:26:44, static
    *via 172.16.160.21, Vlan160, [1/0], 03:26:58, static
---snip---
NY-N3K-LEAF-10#

NY-N3K-LEAF-11# show ip route static
IP Route Table for VRF "default"
'*' denotes best ucast next-hop
'**' denotes best mcast next-hop
'[x/y]' denotes [preference/metric]
'%<string>' in via output denotes VRF <string>

172.16.240.0/24, ubest/mbest: 2/0
    *via 172.16.161.20, Vlan161, [1/0], 03:27:39, static
    *via 172.16.161.21, Vlan161, [1/0], 03:27:51, static
172.16.241.0/24, ubest/mbest: 2/0
    *via 172.16.161.20, Vlan161, [1/0], 03:27:39, static
    *via 172.16.161.21, Vlan161, [1/0], 03:27:51, static
---snip---
NY-N3K-LEAF-11#

Again, these steps are straightforward and they show how we can achieve ECMP with static routing for North/South traffic. But what happens if, for example, one of the two NSX-T Edge Transport Nodes is down? Let's assume ny-nsxt-edge-node-20 is down.
Traffic from the Spine switches will still be forwarded to both ToR switches, and once the ECMP hash is calculated, the traffic is forwarded to one of the four Next Hops (the four Edge Transport Node Layer 3 interfaces). Based on the hash calculation, it could be Next Hop 172.16.160.20 or 172.16.161.20, and both of these interfaces belong to ny-nsxt-edge-node-20. This traffic will be blackholed and dropped! But why do the ToR switches still announce the overlay networks 172.16.240.0/24 and 172.16.241.0/24 to the Spine switches? The reason is simple: for both ToR switches the static route entries are still valid, as VLAN 160/161 and/or the Next Hop are still UP. So from the ToR switch routing table perspective all is fine. These static route entries will potentially never go down, as the Next Hop IP addresses belong to VLAN 160 or VLAN 161 and these VLANs are always in the state UP as long as a single physical port is UP and part of one of these VLANs (assuming the ToR switch is up and running). Even when all attached ESXi hosts are down, the InterSwitch link between the ToR switches is still UP and hence VLAN 160 and VLAN 161 are still UP. Please keep in mind, with BGP this problem does not exist, as we have BGP keepalives, and once the NSX-T Edge Transport Node is down, the ToR switch tears down the BGP session and invalidates the local route entries. But how can we solve the blackholing issue with static routing? The answer is Bi-Directional Forwarding Detection (BFD) for static routing.

What is BFD?
BFD is nothing else than a purpose-built keepalive protocol that routing protocols, including first hop redundancy protocols (e.g. HSRP or VRRP), typically subscribe to. Various protocols can piggyback a single BFD session. In the context of NSX-T, BFD can detect link failures in milliseconds or sub-seconds (NSX-T Bare Metal Edge Nodes with 3 x 50ms) or near sub-second (NSX-T VM-based Edge Nodes with 3 x 500ms). All protocols have some way of detecting failure, usually timer-related. Tuning these timers can theoretically get you sub-second failure detection too, but this produces unnecessarily high overhead, as these protocols weren't designed with that in mind. BFD was specifically built for fast failure detection while keeping the CPU load low. Please keep in mind, if you have, for example, BGP running between two physical routers, there is no need for BFD sessions for link failure detection, as the routing protocol will detect the link-down event instantly. But for two routers (e.g. Tier-0 Gateways) connected through intermediate Layer 2/3 nodes (physical infra, vDS, etc.), where the routing protocol cannot detect a link-down event, the failure must be detected through a dead timer. Welcome to the virtual world!! BFD was later enhanced with the capability to support static routing too, even though the driver for using BFD with static routes was not low CPU load or fast failure detection, but rather extending static routes with keepalive functionality.

So how can we apply BFD for static routing in our lab? Multiple configuration steps are required. Before we can associate BFD with the static routes on the NSX-T Tier-0 Gateway NY-T0-GATEWAY-01, a BFD profile for static routes must be created. This is shown in the diagram below. I am using the same BFD parameters (Interval = 500ms and Declare Dead Multiple = 3) as NSX-T 3.0 defines by default for BFD registered for BGP. The next step is the configuration of BFD peers for static routing at Tier-0 Gateway level.
I am using the same Next Hop IP addresses (172.16.160.254 and 172.16.161.254) for the BFD peers as I have used for the static routes northbound towards the ToR switches. Again, this BFD peer configuration is configured at Tier-0 Gateway level, but the realization of the BFD peers happens at Edge Transport Node level. On each of the two NSX-T Edge Transport Nodes (Service Router) two BGP sessions are realized. The appropriate BFD peer source interface on the Tier-0 Gateway is automatically selected (the Layer 3 LIF) by NSX-T, but as you see, NSX-T allows you to specify the BFD source interface too. The table below shows the global BFD timer configuration and the BFD peers with source and peer (destination) IP. Table 5 - NSX-T Edge Transport Node BFD Configuration ny-nsxt-edge-node-20 (Service Router) ny-nsxt-edge-node-21 (Service Router) ny-nsxt-edge-node-20(tier0_sr)> get bfd-config Logical Router UUID           : 1cfd7da2-f37c-4108-8f19-7725822f0552 vrf            : 2 lr-id          : 8193 name           : SR-NY-T0-GATEWAY-01 type           : PLR-SR Global BFD configuration     Enabled        : True     Min RX Interval: 500     Min TX Interval: 500     Min RX TTL     : 255     Multiplier     : 3 Port               : 64a2e029-ad69-4ce1-a40e-def0956a9d2d Session BFD configuration    Source         : 172.16.160.20     Peer           : 172.16.160.254     Enabled        : True     Min RX Interval: 500     Min TX Interval: 500     Min RX TTL     : 255     Multiplier     : 3 Port               : 371a9b3f-d669-493a-a46b-161d3536b261 Session BFD configuration     Source         : 172.16.161.20     Peer           : 172.16.161.254     Enabled        : True     Min RX Interval: 500     Min TX Interval: 500     Min RX TTL     : 255     Multiplier     : 3 ny-nsxt-edge-node-20(tier0_sr)> ny-nsxt-edge-node-21(tier0_sr)> get bfd-config Logical Router UUID           : a2ea4cbc-c486-46a1-a663-c9c5815253af vrf            : 1 lr-id          : 8194 name           : SR-NY-T0-GATEWAY-01 type           : PLR-SR Global BFD configuration     Enabled        : True     Min RX Interval: 500     Min TX Interval: 500     Min RX TTL     : 255     Multiplier     : 3 Port               : a5454564-ef1c-4e30-922f-9876b9df38df Session BFD configuration    Source         : 172.16.160.21     Peer           : 172.16.160.254     Enabled        : True     Min RX Interval: 500     Min TX Interval: 500     Min RX TTL     : 255     Multiplier     : 3 Port               : 8423e83b-0a69-44f4-90d1-07d8ece4f55e Session BFD configuration    Source         : 172.16.161.21     Peer           : 172.16.161.254     Enabled        : True     Min RX Interval: 500     Min TX Interval: 500     Min RX TTL     : 255     Multiplier     : 3 ny-nsxt-edge-node-21(tier0_sr)> BFD in general and for static routing as wll requires that the peering site is configured with BFD too to ensure BFD keepalives are send out replied respectively. Once BFD peers are configured on the Tier-0 Gateway, the ToR switches require the appropriate BFD peer configuration too. This is shown in the table below. Each ToR switch gets two BFD peer configurations, one for each of the NSX-T Edge Transport Node. Table 6 - Nexus ToR Switches BFD for Static Routing Configuration NY-N3K-LEAF-10 NY-N3K-LEAF-11 feature bfd ! ip route static bfd Vlan160 172.16.160.20 ip route static bfd Vlan160 172.16.160.21 feature bfd ! 
ip route static bfd Vlan161 172.16.161.20 ip route static bfd Vlan161 172.16.161.21 Once both ends of the BFD peers are configured correctly, the BFD sessions should come up and the static route should be installed into the routing table. The table below shows the two BFD neighbors for the static routing (interface VLAN160 respective VLAN161). The BFD neighbor for interface Eth1/49 is used for the BFD peer towards the Spine switch and is registered for OSPF.  The NX-OS operating system does not mention "static routing" for the registered protocol, it shows "netstack" - reason unknown. Table 7 - Nexus ToR Switches BFD for Static Routing Configuration and Verification NY-N3K-LEAF-10/11 NY-N3K-LEAF-10# show bfd neighbors OurAddr         NeighAddr       LD/RD                 RH/RS           Holdown(mult)     State       Int                   Vrf                  172.16.160.254  172.16.160.20   1090519041/2635291218 Up              1099(3)           Up          Vlan160               default                       172.16.160.254  172.16.160.21   1090519042/3842218904 Up              1413(3)           Up          Vlan160               default                172.16.3.18     172.16.3.17     1090519043/1090519041 Up              5629(3)           Up          Eth1/49               default              NY-N3K-LEAF-10# NY-N3K-LEAF-11# show bfd neighbors OurAddr         NeighAddr       LD/RD                 RH/RS           Holdown(mult)     State       Int                   Vrf                  172.16.161.254  172.16.161.20   1090519041/591227029  Up              1384(3)           Up          Vlan161               default                       172.16.161.254  172.16.161.21   1090519042/2646176019 Up              1385(3)           Up          Vlan161               default               172.16.3.22     172.16.3.21     1090519043/1090519042 Up              4696(3)           Up          Eth1/49               default              NY-N3K-LEAF-11# NY-N3K-LEAF-10# show bfd neighbors details OurAddr         NeighAddr       LD/RD                 RH/RS           Holdown(mult)     State       Int                   Vrf                    172.16.160.254  172.16.160.20   1090519041/2635291218 Up              1151(3)           Up          Vlan160               default                         Session state is Up and not using echo function Local Diag: 0, Demand mode: 0, Poll bit: 0, Authentication: None MinTxInt: 500000 us, MinRxInt: 500000 us, Multiplier: 3 Received MinRxInt: 500000 us, Received Multiplier: 3 Holdown (hits): 1500 ms (0), Hello (hits): 500 ms (22759) Rx Count: 20115, Rx Interval (ms) min/max/avg: 83/1921/437 last: 348 ms ago Tx Count: 22759, Tx Interval (ms) min/max/avg: 386/386/386 last: 24 ms ago Registered protocols:  netstack Uptime: 0 days 2 hrs 26 mins 39 secs, Upcount: 1 Last packet: Version: 1                - Diagnostic: 0              State bit: Up             - Demand bit: 0              Poll bit: 0               - Final bit: 0              Multiplier: 3             - Length: 24              My Discr.: -1659676078    - Your Discr.: 1090519041              Min tx interval: 500000   - Min rx interval: 500000              Min Echo interval: 0      - Authentication bit: 0 Hosting LC: 1, Down reason: None, Reason not-hosted: None OurAddr         NeighAddr       LD/RD                 RH/RS           Holdown(mult)     State       Int                   Vrf                    172.16.160.254  172.16.160.21   1090519042/3842218904 Up              1260(3)           Up          Vlan160               
default                         Session state is Up and not using echo function Local Diag: 0, Demand mode: 0, Poll bit: 0, Authentication: None MinTxInt: 500000 us, MinRxInt: 500000 us, Multiplier: 3 Received MinRxInt: 500000 us, Received Multiplier: 3 Holdown (hits): 1500 ms (0), Hello (hits): 500 ms (22774) Rx Count: 20105, Rx Interval (ms) min/max/avg: 0/1813/438 last: 239 ms ago Tx Count: 22774, Tx Interval (ms) min/max/avg: 386/386/386 last: 24 ms ago Registered protocols:  netstack Uptime: 0 days 2 hrs 26 mins 46 secs, Upcount: 1 Last packet: Version: 1                - Diagnostic: 0              State bit: Up             - Demand bit: 0              Poll bit: 0               - Final bit: 0              Multiplier: 3             - Length: 24              My Discr.: -452748392     - Your Discr.: 1090519042              Min tx interval: 500000   - Min rx interval: 500000              Min Echo interval: 0      - Authentication bit: 0 Hosting LC: 1, Down reason: None, Reason not-hosted: None OurAddr         NeighAddr       LD/RD                 RH/RS           Holdown(mult)     State       Int                   Vrf                    172.16.3.18     172.16.3.17     1090519043/1090519041 Up              5600(3)           Up          Eth1/49               default                Session state is Up and using echo function with 500 ms interval Local Diag: 0, Demand mode: 0, Poll bit: 0, Authentication: None MinTxInt: 500000 us, MinRxInt: 2000000 us, Multiplier: 3 Received MinRxInt: 2000000 us, Received Multiplier: 3 Holdown (hits): 6000 ms (0), Hello (hits): 2000 ms (5309) Rx Count: 5309, Rx Interval (ms) min/max/avg: 7/2101/1690 last: 399 ms ago Tx Count: 5309, Tx Interval (ms) min/max/avg: 1689/1689/1689 last: 249 ms ago Registered protocols:  ospf Uptime: 0 days 2 hrs 29 mins 29 secs, Upcount: 1 Last packet: Version: 1                - Diagnostic: 0              State bit: Up             - Demand bit: 0              Poll bit: 0               - Final bit: 0              Multiplier: 3             - Length: 24              My Discr.: 1090519041     - Your Discr.: 1090519043              Min tx interval: 500000   - Min rx interval: 2000000              Min Echo interval: 500000 - Authentication bit: 0 Hosting LC: 1, Down reason: None, Reason not-hosted: None NY-N3K-LEAF-10# NY-N3K-LEAF-11# show bfd neighbors details OurAddr         NeighAddr       LD/RD                 RH/RS           Holdown(mult)     State       Int                   Vrf                    172.16.161.254  172.16.161.20   1090519041/591227029  Up              1235(3)           Up          Vlan161               default                         Session state is Up and not using echo function Local Diag: 0, Demand mode: 0, Poll bit: 0, Authentication: None MinTxInt: 500000 us, MinRxInt: 500000 us, Multiplier: 3 Received MinRxInt: 500000 us, Received Multiplier: 3 Holdown (hits): 1500 ms (0), Hello (hits): 500 ms (22634) Rx Count: 19972, Rx Interval (ms) min/max/avg: 93/1659/438 last: 264 ms ago Tx Count: 22634, Tx Interval (ms) min/max/avg: 386/386/386 last: 127 ms ago Registered protocols:  netstack Uptime: 0 days 2 hrs 25 mins 47 secs, Upcount: 1 Last packet: Version: 1                - Diagnostic: 0              State bit: Up             - Demand bit: 0              Poll bit: 0               - Final bit: 0              Multiplier: 3             - Length: 24              My Discr.: 591227029      - Your Discr.: 1090519041              Min tx interval: 500000   - Min rx interval: 500000              Min Echo interval: 0      - 
Authentication bit: 0 Hosting LC: 1, Down reason: None, Reason not-hosted: None OurAddr         NeighAddr       LD/RD                 RH/RS           Holdown(mult)     State       Int                   Vrf                    172.16.161.254  172.16.161.21   1090519042/2646176019 Up              1162(3)           Up          Vlan161               default                         Session state is Up and not using echo function Local Diag: 0, Demand mode: 0, Poll bit: 0, Authentication: None MinTxInt: 500000 us, MinRxInt: 500000 us, Multiplier: 3 Received MinRxInt: 500000 us, Received Multiplier: 3 Holdown (hits): 1500 ms (0), Hello (hits): 500 ms (22652) Rx Count: 20004, Rx Interval (ms) min/max/avg: 278/1799/438 last: 337 ms ago Tx Count: 22652, Tx Interval (ms) min/max/avg: 386/386/386 last: 127 ms ago Registered protocols:  netstack Uptime: 0 days 2 hrs 25 mins 58 secs, Upcount: 1 Last packet: Version: 1                - Diagnostic: 0              State bit: Up             - Demand bit: 0              Poll bit: 0               - Final bit: 0              Multiplier: 3             - Length: 24              My Discr.: -1648791277    - Your Discr.: 1090519042              Min tx interval: 500000   - Min rx interval: 500000              Min Echo interval: 0      - Authentication bit: 0 Hosting LC: 1, Down reason: None, Reason not-hosted: None OurAddr         NeighAddr       LD/RD                 RH/RS           Holdown(mult)     State       Int                   Vrf                    172.16.3.22     172.16.3.21     1090519043/1090519042 Up              4370(3)           Up          Eth1/49               default                Session state is Up and using echo function with 500 ms interval Local Diag: 0, Demand mode: 0, Poll bit: 0, Authentication: None MinTxInt: 500000 us, MinRxInt: 2000000 us, Multiplier: 3 Received MinRxInt: 2000000 us, Received Multiplier: 3 Holdown (hits): 6000 ms (0), Hello (hits): 2000 ms (5236) Rx Count: 5236, Rx Interval (ms) min/max/avg: 553/1698/1690 last: 1629 ms ago Tx Count: 5236, Tx Interval (ms) min/max/avg: 1689/1689/1689 last: 1020 ms ago Registered protocols:  ospf Uptime: 0 days 2 hrs 27 mins 26 secs, Upcount: 1 Last packet: Version: 1                - Diagnostic: 0              State bit: Up             - Demand bit: 0              Poll bit: 0               - Final bit: 0              Multiplier: 3             - Length: 24              My Discr.: 1090519042     - Your Discr.: 1090519043              Min tx interval: 500000   - Min rx interval: 2000000              Min Echo interval: 500000 - Authentication bit: 0 Hosting LC: 1, Down reason: None, Reason not-hosted: None NY-N3K-LEAF-11# The table below shows the BFD session on the Tier-0 Gateway on the Service Router (SR). The CLI shows the BFD peers and source IP addresses along the state. Please note, BFD does not require that both end of the BFD peer are configured with an identically interval and multiplier value, but for troubleshooting reason are identically parameter recommended. 
Table 8 - NSX-T Edge Transport Node BFD Verification ny-nsxt-edge-node-20 (Service Router) ny-nsxt-edge-node-21 (Service Router) ny-nsxt-edge-node-20(tier0_sr)> get bfd-sessions BFD Session Dest_port                     : 3784 Diag                          : No Diagnostic Encap                         : vlan Forwarding                    : last true (current true) Interface                     : 64a2e029-ad69-4ce1-a40e-def0956a9d2d Keep-down                     : false Last_cp_diag                  : No Diagnostic Last_cp_rmt_diag              : No Diagnostic Last_cp_rmt_state             : up Last_cp_state                 : up Last_fwd_state                : UP Last_local_down_diag          : No Diagnostic Last_remote_down_diag         : No Diagnostic Last_up_time                  : 2020-07-07 15:42:23 Local_address                 : 172.16.160.20 Local_discr                   : 2635291218 Min_rx_ttl                    : 255 Multiplier                    : 3 Received_remote_diag          : No Diagnostic Received_remote_state         : up Remote_address                : 172.16.160.254 Remote_admin_down             : false Remote_diag                   : No Diagnostic Remote_discr                  : 1090519041 Remote_min_rx_interval        : 500 Remote_min_tx_interval        : 500 Remote_multiplier             : 3 Remote_state                  : up Router                        : 1cfd7da2-f37c-4108-8f19-7725822f0552 Router_down                   : false Rx_cfg_min                    : 500 Rx_interval                   : 500 Service-link                  : false Session_type                  : LR_PORT State                         : up Tx_cfg_min                    : 500 Tx_interval                   : 500 BFD Session Dest_port                     : 3784 Diag                          : No Diagnostic Encap                         : vlan Forwarding                    : last true (current true) Interface                     : 371a9b3f-d669-493a-a46b-161d3536b261 Keep-down                     : false Last_cp_diag                  : No Diagnostic Last_cp_rmt_diag              : No Diagnostic Last_cp_rmt_state             : up Last_cp_state                 : up Last_fwd_state                : UP Last_local_down_diag          : No Diagnostic Last_remote_down_diag         : No Diagnostic Last_up_time                  : 2020-07-07 15:42:24 Local_address                 : 172.16.161.20 Local_discr                   : 591227029 Min_rx_ttl                    : 255 Multiplier                    : 3 Received_remote_diag          : No Diagnostic Received_remote_state         : up Remote_address                : 172.16.161.254 Remote_admin_down             : false Remote_diag                   : No Diagnostic Remote_discr                  : 1090519041 Remote_min_rx_interval        : 500 Remote_min_tx_interval        : 500 Remote_multiplier             : 3 Remote_state                  : up Router                        : 1cfd7da2-f37c-4108-8f19-7725822f0552 Router_down                   : false Rx_cfg_min                    : 500 Rx_interval                   : 500 Service-link                  : false Session_type                  : LR_PORT State                         : up Tx_cfg_min                    : 500 Tx_interval                   : 500 ny-nsxt-edge-node-20(tier0_sr)> ny-nsxt-edge-node-21(tier0_sr)> get bfd-sessions BFD Session Dest_port                     : 3784 Diag                          : No Diagnostic Encap                         : vlan Forwarding                    : last true (current 
true) Interface : a5454564-ef1c-4e30-922f-9876b9df38df Keep-down : false Last_cp_diag : No Diagnostic Last_cp_rmt_diag : No Diagnostic Last_cp_rmt_state : up Last_cp_state : up Last_fwd_state : UP Last_local_down_diag : No Diagnostic Last_remote_down_diag : No Diagnostic Last_up_time : 2020-07-07 15:42:15 Local_address : 172.16.160.21 Local_discr : 3842218904 Min_rx_ttl : 255 Multiplier : 3 Received_remote_diag : No Diagnostic Received_remote_state : up Remote_address : 172.16.160.254 Remote_admin_down : false Remote_diag : No Diagnostic Remote_discr : 1090519042 Remote_min_rx_interval : 500 Remote_min_tx_interval : 500 Remote_multiplier : 3 Remote_state : up Router : a2ea4cbc-c486-46a1-a663-c9c5815253af Router_down : false Rx_cfg_min : 500 Rx_interval : 500 Service-link : false Session_type : LR_PORT State : up Tx_cfg_min : 500 Tx_interval : 500 BFD Session Dest_port : 3784 Diag : No Diagnostic Encap : vlan Forwarding : last true (current true) Interface : 8423e83b-0a69-44f4-90d1-07d8ece4f55e Keep-down : false Last_cp_diag : No Diagnostic Last_cp_rmt_diag : No Diagnostic Last_cp_rmt_state : up Last_cp_state : up Last_fwd_state : UP Last_local_down_diag : No Diagnostic Last_remote_down_diag : No Diagnostic Last_up_time : 2020-07-07 15:42:15 Local_address : 172.16.161.21 Local_discr : 2646176019 Min_rx_ttl : 255 Multiplier : 3 Received_remote_diag : No Diagnostic Received_remote_state : up Remote_address : 172.16.161.254 Remote_admin_down : false Remote_diag : No Diagnostic Remote_discr : 1090519042 Remote_min_rx_interval : 500 Remote_min_tx_interval : 500 Remote_multiplier : 3 Remote_state : up Router : a2ea4cbc-c486-46a1-a663-c9c5815253af Router_down : false Rx_cfg_min : 500 Rx_interval : 500 Service-link : false Session_type : LR_PORT State : up Tx_cfg_min : 500 Tx_interval : 500 ny-nsxt-edge-node-21(tier0_sr)>

I would really like to emphasize that static routing with NSX-T Edge Transport Nodes in A/A mode must use BFD to avoid blackholing traffic. If BFD for static routes is not supported on the ToR switches, I highly recommend using A/S mode with HA VIP instead, or switching back to BGP.

Design Option 2 - Static Routing (A/S HA VIP)

The first step in design option 2 is the Tier-0 static route configuration for northbound traffic. The most common way is to configure a default route northbound. The diagram below shows the setup with the two northbound default routes (in black).
As already mentioned, HA VIP requires that both NSX-T Edge Transport Node interfaces belong to the same Layer 2 segment (NY-T0-VLAN-SEGMENT-160). A single default route with two different next hops (172.16.160.253 and 172.16.160.254) is configured on the NY-T0-GATEWAY-02. With this design we can also achieve ECMP for northbound traffic towards the ToR switches. The diagram below shows the corresponding NSX-T Tier-0 Gateway static routing configuration. Please keep in mind again that at the NSX-T Edge Transport Node level each Edge Transport Node will have two default route entries, even though only a single default route (with two next hops) has been configured at the Tier-0 Gateway level. This is shown in the table below. Please note that the configuration steps for the Tier-1 Gateway (NY-T1-GATEWAY-BLUE) and for connecting it to the Tier-0 Gateway are not shown.

Table 9 - NSX-T Edge Transport Node Routing Table for Design Option 2 (A/S HA VIP) ny-nsxt-edge-node-22 (Service Router) ny-nsxt-edge-node-23 (Service Router) ny-nsxt-edge-node-22(tier0_sr)> get route 0.0.0.0/0 Flags: t0c - Tier0-Connected, t0s - Tier0-Static, b - BGP, t0n - Tier0-NAT, t1s - Tier1-Static, t1c - Tier1-Connected, t1n: Tier1-NAT, t1l: Tier1-LB VIP, t1ls: Tier1-LB SNAT, t1d: Tier1-DNS FORWARDER, t1ipsec: Tier1-IPSec, isr: Inter-SR, > - selected route, * - FIB route Total number of routes: 1 t0s> * 0.0.0.0/0 [1/0] via 172.16.160.253, uplink-278, 00:00:27 t0s> * 0.0.0.0/0 [1/0] via 172.16.160.254, uplink-278, 00:00:27 ny-nsxt-edge-node-22(tier0_sr)> ny-nsxt-edge-node-23(tier0_sr)> get route 0.0.0.0/0 Flags: t0c - Tier0-Connected, t0s - Tier0-Static, b - BGP, t0n - Tier0-NAT, t1s - Tier1-Static, t1c - Tier1-Connected, t1n: Tier1-NAT, t1l: Tier1-LB VIP, t1ls: Tier1-LB SNAT, t1d: Tier1-DNS FORWARDER, t1ipsec: Tier1-IPSec, isr: Inter-SR, > - selected route, * - FIB route Total number of routes: 1 t0s> * 0.0.0.0/0 [1/0] via 172.16.160.253, uplink-279, 00:00:57 t0s> * 0.0.0.0/0 [1/0] via 172.16.160.254, uplink-279, 00:00:57 ny-nsxt-edge-node-23(tier0_sr)>

The second step is to configure static routing southbound from the ToR switches towards the NSX-T Edge Transport Nodes. Each ToR switch is configured with two static routes to forward traffic to the destination overlay networks (overlay segments 172.16.242.0/24 and 172.16.243.0/24) within NSX-T. For each of these static routes the next hop is the NSX-T Tier-0 Gateway HA VIP. The table below shows the static routing configuration on the ToR switches and the resulting routing tables. The next hop is the Tier-0 Gateway HA VIP 172.16.160.24 for all static routes.
Table 10 - Nexus ToR Switches Static Routing Configuration and Resulting Routing Table for Design Option 2 (A/S HA VIP) NY-N3K-LEAF-10 NY-N3K-LEAF-11 ip route 172.16.242.0/24 Vlan160 172.16.160.24 ip route 172.16.243.0/24 Vlan160 172.16.160.24 ip route 172.16.242.0/24 Vlan160 172.16.160.24 ip route 172.16.243.0/24 Vlan160 172.16.160.24 NY-N3K-LEAF-10# show ip route static IP Route Table for VRF "default" '*' denotes best ucast next-hop '**' denotes best mcast next-hop '[x/y]' denotes [preference/metric] '%<string>' in via output denotes VRF <string> 172.16.240.0/24, ubest/mbest: 2/0 *via 172.16.160.20, Vlan160, [1/0], 02:51:34, static *via 172.16.160.21, Vlan160, [1/0], 02:51:41, static 172.16.241.0/24, ubest/mbest: 2/0 *via 172.16.160.20, Vlan160, [1/0], 02:51:34, static *via 172.16.160.21, Vlan160, [1/0], 02:51:41, static 172.16.242.0/24, ubest/mbest: 1/0 *via 172.16.160.24, Vlan160, [1/0], 02:55:42, static 172.16.243.0/24, ubest/mbest: 1/0 *via 172.16.160.24, Vlan160, [1/0], 02:55:42, static NY-N3K-LEAF-10# NY-N3K-LEAF-11# show ip route static IP Route Table for VRF "default" '*' denotes best ucast next-hop '**' denotes best mcast next-hop '[x/y]' denotes [preference/metric] '%<string>' in via output denotes VRF <string> 172.16.240.0/24, ubest/mbest: 2/0 *via 172.16.161.20, Vlan161, [1/0], 02:53:04, static *via 172.16.161.21, Vlan161, [1/0], 02:53:12, static 172.16.241.0/24, ubest/mbest: 2/0 *via 172.16.161.20, Vlan161, [1/0], 02:53:04, static *via 172.16.161.21, Vlan161, [1/0], 02:53:12, static 172.16.242.0/24, ubest/mbest: 1/0 *via 172.16.160.24, Vlan160, [1/0], 02:55:03, static 172.16.243.0/24, ubest/mbest: 1/0 *via 172.16.160.24, Vlan160, [1/0], 02:55:03, static NY-N3K-LEAF-11#

Failover Sanity Checks

The table below shows the static routing table on both ToR switches for a number of failover cases, together with a short comment on the resulting behavior.

Table 11 - Failover Sanity Check Failover Case NY-N3K-LEAF-10 (Routing Table) NY-N3K-LEAF-11 (Routing Table) Comments All NSX-T Edge Transport Nodes are UP NY-N3K-LEAF-10# show ip route static IP Route Table for VRF "default" '*' denotes best ucast next-hop '**' denotes best mcast next-hop '[x/y]' denotes [preference/metric] '%<string>' in via output denotes VRF <string> 172.16.240.0/24, ubest/mbest: 2/0 *via 172.16.160.20, Vlan160, [1/0], 00:58:27, static *via 172.16.160.21, Vlan160, [1/0], 00:58:43, static 172.16.241.0/24, ubest/mbest: 2/0 *via 172.16.160.20, Vlan160, [1/0], 00:58:27, static *via 172.16.160.21, Vlan160, [1/0], 00:58:43, static 172.16.242.0/24, ubest/mbest: 1/0 *via 172.16.160.24, Vlan160, [1/0], 01:02:47, static 172.16.243.0/24, ubest/mbest: 1/0 *via 172.16.160.24, Vlan160, [1/0], 01:02:47, static NY-N3K-LEAF-10# NY-N3K-LEAF-11# show ip route static IP Route Table for VRF "default" '*' denotes best ucast next-hop '**' denotes best mcast next-hop '[x/y]' denotes [preference/metric] '%<string>' in via output denotes VRF <string> 172.16.240.0/24, ubest/mbest: 2/0 *via 172.16.161.20, Vlan161, [1/0], 00:59:10, static *via 172.16.161.21, Vlan161, [1/0], 00:59:25, static 172.16.241.0/24, ubest/mbest: 2/0 *via 172.16.161.20, Vlan161, [1/0], 00:59:10, static *via 172.16.161.21, Vlan161, [1/0], 00:59:25, static 172.16.242.0/24, ubest/mbest: 1/0 *via 172.16.160.24, Vlan160, [1/0], 01:01:21, static 172.16.243.0/24, ubest/mbest: 1/0 *via 172.16.160.24, Vlan160, [1/0], 01:01:21, static NY-N3K-LEAF-11# NSX-T Edge Transport Node ny-nsxt-edge-node-20 is DOWN All other NSX-T Edge Transport Node are UP NY-N3K-LEAF-10# show ip route static IP Route Table for VRF "default" '*'
denotes best ucast next-hop '**' denotes best mcast next-hop '[x/y]' denotes [preference/metric] '%<string>' in via output denotes VRF <string> 172.16.240.0/24, ubest/mbest: 1/0     *via 172.16.160.21, Vlan160, [1/0], 01:01:01, static 172.16.241.0/24, ubest/mbest: 1/0     *via 172.16.160.21, Vlan160, [1/0], 01:01:01, static 172.16.242.0/24, ubest/mbest: 1/0     *via 172.16.160.24, Vlan160, [1/0], 01:05:05, static 172.16.243.0/24, ubest/mbest: 1/0     *via 172.16.160.24, Vlan160, [1/0], 01:05:05, static NY-N3K-LEAF-10# NY-N3K-LEAF-11# show ip route static IP Route Table for VRF "default" '*' denotes best ucast next-hop '**' denotes best mcast next-hop '[x/y]' denotes [preference/metric] '%<string>' in via output denotes VRF <string> 172.16.240.0/24, ubest/mbest: 1/0     *via 172.16.161.21, Vlan161, [1/0], 01:01:21, static 172.16.241.0/24, ubest/mbest: 1/0     *via 172.16.161.21, Vlan161, [1/0], 01:01:21, static 172.16.242.0/24, ubest/mbest: 1/0     *via 172.16.160.24, Vlan160, [1/0], 01:03:17, static 172.16.243.0/24, ubest/mbest: 1/0     *via 172.16.160.24, Vlan160, [1/0], 01:03:17, static NY-N3K-LEAF-11# Route entries with ny-nsxt-edge-node-20 (172.16.160.20 and 172.16.161.20) are removed by BFD NSX-T Edge Transport Node ny-nsxt-edge-node-21 is DOWN All other NSX-T Edge Transport Node are UP NY-N3K-LEAF-10# show ip route static IP Route Table for VRF "default" '*' denotes best ucast next-hop '**' denotes best mcast next-hop '[x/y]' denotes [preference/metric] '%<string>' in via output denotes VRF <string> 172.16.240.0/24, ubest/mbest: 1/0     *via 172.16.160.20, Vlan160, [1/0], 00:02:40, static 172.16.241.0/24, ubest/mbest: 1/0     *via 172.16.160.20, Vlan160, [1/0], 00:02:40, static 172.16.242.0/24, ubest/mbest: 1/0     *via 172.16.160.24, Vlan160, [1/0], 01:12:13, static 172.16.243.0/24, ubest/mbest: 1/0     *via 172.16.160.24, Vlan160, [1/0], 01:12:13, static NY-N3K-LEAF-10# NY-N3K-LEAF-11# show ip route static IP Route Table for VRF "default" '*' denotes best ucast next-hop '**' denotes best mcast next-hop '[x/y]' denotes [preference/metric] '%<string>' in via output denotes VRF <string> 172.16.240.0/24, ubest/mbest: 1/0     *via 172.16.161.20, Vlan161, [1/0], 00:03:04, static 172.16.241.0/24, ubest/mbest: 1/0     *via 172.16.161.20, Vlan161, [1/0], 00:03:04, static 172.16.242.0/24, ubest/mbest: 1/0     *via 172.16.160.24, Vlan160, [1/0], 01:10:28, static 172.16.243.0/24, ubest/mbest: 1/0     *via 172.16.160.24, Vlan160, [1/0], 01:10:28, static NY-N3K-LEAF-11# Route entries with ny-nsxt-edge-node-21 (172.16.160.21 and 172.16.161.21) are removed by BFD NSX-T Edge Transport Node ny-nsxt-edge-node-22 is DOWN All other NSX-T Edge Transport Node are UP NY-N3K-LEAF-10# show ip route static IP Route Table for VRF "default" '*' denotes best ucast next-hop '**' denotes best mcast next-hop '[x/y]' denotes [preference/metric] '%<string>' in via output denotes VRF <string> 172.16.240.0/24, ubest/mbest: 2/0     *via 172.16.160.20, Vlan160, [1/0], 00:06:55, static     *via 172.16.160.21, Vlan160, [1/0], 00:00:09, static 172.16.241.0/24, ubest/mbest: 2/0     *via 172.16.160.20, Vlan160, [1/0], 00:06:55, static     *via 172.16.160.21, Vlan160, [1/0], 00:00:09, static 172.16.242.0/24, ubest/mbest: 1/0     *via 172.16.160.24, Vlan160, [1/0], 01:16:28, static 172.16.243.0/24, ubest/mbest: 1/0     *via 172.16.160.24, Vlan160, [1/0], 01:16:28, static NY-N3K-LEAF-10# NY-N3K-LEAF-11# show ip route static IP Route Table for VRF "default" '*' denotes best ucast next-hop '**' denotes best mcast next-hop '[x/y]' 
denotes [preference/metric] '%<string>' in via output denotes VRF <string> 172.16.240.0/24, ubest/mbest: 2/0 *via 172.16.161.20, Vlan161, [1/0], 00:07:01, static *via 172.16.161.21, Vlan161, [1/0], 00:00:16, static 172.16.241.0/24, ubest/mbest: 2/0 *via 172.16.161.20, Vlan161, [1/0], 00:07:01, static *via 172.16.161.21, Vlan161, [1/0], 00:00:16, static 172.16.242.0/24, ubest/mbest: 1/0 *via 172.16.160.24, Vlan160, [1/0], 01:14:25, static 172.16.243.0/24, ubest/mbest: 1/0 *via 172.16.160.24, Vlan160, [1/0], 01:14:25, static NY-N3K-LEAF-11# The failure of a single NSX-T Edge Transport Node used for the HA VIP does not change the routing table NSX-T Edge Transport Node ny-nsxt-edge-node-23 is DOWN All other NSX-T Edge Transport Node are UP NY-N3K-LEAF-10# show ip route static IP Route Table for VRF "default" '*' denotes best ucast next-hop '**' denotes best mcast next-hop '[x/y]' denotes [preference/metric] '%<string>' in via output denotes VRF <string> 172.16.240.0/24, ubest/mbest: 2/0 *via 172.16.160.20, Vlan160, [1/0], 00:10:58, static *via 172.16.160.21, Vlan160, [1/0], 00:04:12, static 172.16.241.0/24, ubest/mbest: 2/0 *via 172.16.160.20, Vlan160, [1/0], 00:10:58, static *via 172.16.160.21, Vlan160, [1/0], 00:04:12, static 172.16.242.0/24, ubest/mbest: 1/0 *via 172.16.160.24, Vlan160, [1/0], 01:20:31, static 172.16.243.0/24, ubest/mbest: 1/0 *via 172.16.160.24, Vlan160, [1/0], 01:20:31, static NY-N3K-LEAF-10# NY-N3K-LEAF-11# show ip route static IP Route Table for VRF "default" '*' denotes best ucast next-hop '**' denotes best mcast next-hop '[x/y]' denotes [preference/metric] '%<string>' in via output denotes VRF <string> 172.16.240.0/24, ubest/mbest: 2/0 *via 172.16.161.20, Vlan161, [1/0], 00:11:30, static *via 172.16.161.21, Vlan161, [1/0], 00:04:45, static 172.16.241.0/24, ubest/mbest: 2/0 *via 172.16.161.20, Vlan161, [1/0], 00:11:30, static *via 172.16.161.21, Vlan161, [1/0], 00:04:45, static 172.16.242.0/24, ubest/mbest: 1/0 *via 172.16.160.24, Vlan160, [1/0], 01:18:54, static 172.16.243.0/24, ubest/mbest: 1/0 *via 172.16.160.24, Vlan160, [1/0], 01:18:54, static NY-N3K-LEAF-11# The failure of a single NSX-T Edge Transport Node used for the HA VIP does not change the routing table NSX-T Edge Transport Node ny-nsxt-edge-node-20 and ny-nsxt-edge-node-21 are DOWN All other NSX-T Edge Transport Node are UP NY-N3K-LEAF-10# show ip route static IP Route Table for VRF "default" '*' denotes best ucast next-hop '**' denotes best mcast next-hop '[x/y]' denotes [preference/metric] '%<string>' in via output denotes VRF <string> 172.16.242.0/24, ubest/mbest: 1/0 *via 172.16.160.24, Vlan160, [1/0], 01:24:06, static 172.16.243.0/24, ubest/mbest: 1/0 *via 172.16.160.24, Vlan160, [1/0], 01:24:06, static NY-N3K-LEAF-10# NY-N3K-LEAF-11# show ip route static IP Route Table for VRF "default" '*' denotes best ucast next-hop '**' denotes best mcast next-hop '[x/y]' denotes [preference/metric] '%<string>' in via output denotes VRF <string> 172.16.242.0/24, ubest/mbest: 1/0 *via 172.16.160.24, Vlan160, [1/0], 01:22:54, static 172.16.243.0/24, ubest/mbest: 1/0 *via 172.16.160.24, Vlan160, [1/0], 01:22:54, static NY-N3K-LEAF-11# All route entries related to design option 1 are removed by BFD

I hope you had a little bit of fun reading this blog post about static routing with NSX-T. With the knowledge of how to achieve ECMP with static routing, you now have a new and interesting design option for your customers' NSX-T deployments.
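If you prefer to configure the Tier-0 static route and its BFD peer through the NSX-T Policy API instead of the UI, the sketch below shows roughly how that could look. Please treat it as an illustrative example only: the manager FQDN, the Tier-0 ID and the object names are assumptions based on my lab naming (not values taken from the output above), and the exact Policy API payloads may differ slightly between NSX-T releases, so double-check the API guide for your version.

Create a northbound default route towards the ToR SVI 172.16.160.254:
curl -k -u admin -X PATCH "https://ny-nsxt-manager.lab.local/policy/api/v1/infra/tier-0s/NY-T0-GATEWAY-01/static-routes/default-to-leaf-10" \
  -H "Content-Type: application/json" \
  -d '{"network": "0.0.0.0/0", "next_hops": [{"ip_address": "172.16.160.254", "admin_distance": 1}]}'

Add a BFD peer for the same next hop, so the static route is withdrawn when the ToR stops answering:
curl -k -u admin -X PATCH "https://ny-nsxt-manager.lab.local/policy/api/v1/infra/tier-0s/NY-T0-GATEWAY-01/static-routes/bfd-peers/bfd-leaf-10" \
  -H "Content-Type: application/json" \
  -d '{"peer_address": "172.16.160.254", "enabled": true}'

The same pattern would then be repeated for the second next hop (172.16.161.254 in design option 1).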
Software Inventory:
vSphere version: VMware ESXi, 6.5.0, 15256549
vCenter version: 6.5.0, 10964411
NSX-T version: 3.0.0.0.0.15946738 (GA)
Cisco Nexus 3048 NX-OS version: 7.0(3)I7(6)

Blog history
Version 1.0 - 08.07.2020 - first published version
Version 1.1 - 09.07.2020 - minor changes
Version 1.2 - 30.07.2020 - grammar updates - thanks to James Lepthien
Have one 9460-16i and can't get it to work with 4TB Western Digital U.2 NVMe drives. StorCLI works and shows the drives, but any virtual disk created is not recognized by vSphere.
Thanks for this! I was hitting the "400 Error Code: GENERAL_NONSUCCESS" error from Okta after doing the certificate auth, and it turned out that the new Okta app you create in Workspace ONE Access needs its Signature Algorithm set to SHA-256 instead of SHA-1. After that, Okta stopped giving me the "failure: Unable to validate incoming SAML Assertion" error.
When investigating VMware products, you sometimes need to look at the contents of the internal management databases that the products maintain. There are two ways to inspect a management DB: connect to the DB directly on the live appliance and run SQL commands, or look at a dumped text file. In this three-part blog series, I would like to show how to import a dumped DB into a local PostgreSQL instance and investigate it on your own PC. This first part covers how to dump the internal management DBs of SDDC Manager and vCenter.
Note: my PostgreSQL knowledge is only slightly above beginner level, so please forgive any inaccuracies. Also, please treat the procedures provided by VMware as authoritative for DB dumps and similar operations, and use this blog only as supplementary reference information.
Parts two and three of this series are here:
How to import the vCenter and SDDC Manager DBs into a local PC [Installing PostgreSQL]
How to import the vCenter and SDDC Manager DBs into a local PC [Importing and browsing the DB]
Overview of the whole flow
Before explaining how to take the DB dumps, here is the flow of this three-part series. The goal is to browse the contents of the VCSA and SDDC Manager DBs in a PostgreSQL DBMS installed on a local PC. Since the DB contents have to be copied locally, the flow is to dump the DBs and then import the dumps. Part one therefore explains how to take dumps of the target DBs. Part two shows how to install a PostgreSQL DBMS on a Windows PC to import the dumps into. Part three explains the actual import procedure and the points to watch out for.
How to dump the SDDC Manager DB
Dumping the SDDC Manager management DB is very easy: it is included in the SDDC Manager log bundle. The logs can be collected with the sos utility, and by adding the --sddc-manager-logs option you can explicitly collect just the SDDC Manager log bundle. For log collection with the sos utility, see also: Collecting Logs for Your Cloud Foundation System. The log bundle contains a file named postgres-db-dump, which is the DB dump.
How to dump the VCDB (vCenter internal management DB)
I could not find official VMware documentation for this, so the following is purely a PostgreSQL-level procedure. The environment used for this test is vCenter Server Appliance 6.7.0-15132721.
0. Stop the VCSA services
Before taking the DB dump, stop the VCSA services. Refer to the following VMware KB:
https://kb.vmware.com/s/article/2109887?lang=ja
1. Check the postgres user's password
Check the password of the postgres user of the PostgreSQL DB inside the VCSA. The password is stored in the .pgpass file and can be viewed with the following command:
# cat /root/.pgpass
2. Take a dump of the PostgreSQL DB
Once you have the password, take the DB dump. Since the dump can become large, create it in a location with plenty of free space. In this article it is saved under /storage/core, which is allocated 50 GB by default and is mostly empty unless a failure has occurred.
# cd /storage/core/
Then take the dump with the pg_dumpall command; running the following is all that is needed:
# pg_dumpall -U postgres > vcdb_dump
The file name can be anything. The command prompts for a password; enter the password confirmed in step 1. If the password is wrong an error message is shown; if the prompt returns with no output, the dump succeeded.
3. Bundle the tablespace files into the .tgz as well
Part of the VCDB is stored in a separate location, so that part also needs to be collected. Bundle and compress it together with the dump file from step 2 using tar, which also makes it easier to transfer:
# tar cvzf vcdb_dump.tgz vcdb_dump /storage/seat/vpostgres/*
vcdb_dump.tgz should now exist in the current directory (/storage/core); transfer it to your local PC.
4. Start the VCSA services
Do not forget to start the services you stopped before taking the DB dump.
This time I covered how to dump the internal PostgreSQL management DBs of vCenter and SDDC Manager. Next time I will show how to install a PostgreSQL DBMS on a local PC.
How to import the vCenter and SDDC Manager DBs into a local PC [Installing PostgreSQL]
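To put the VCDB steps above together, here is a rough, unofficial sketch of the whole dump sequence as it could be run on the VCSA shell. The service-control commands are the usual way to stop and start the vCenter services per the KB above; everything else comes directly from the steps in this post. Use it as a reference only and adapt the paths to your environment.
Stop the vCenter services first (see KB 2109887):
# service-control --stop --all
Check the postgres password (pg_dumpall will prompt for it):
# cat /root/.pgpass
Create the dump in /storage/core, which normally has plenty of free space:
# cd /storage/core/
# pg_dumpall -U postgres > vcdb_dump
Bundle the dump together with the tablespace files under /storage/seat/vpostgres:
# tar cvzf vcdb_dump.tgz vcdb_dump /storage/seat/vpostgres/*
Start the services again, then copy vcdb_dump.tgz to your local PC (for example with scp):
# service-control --start --all
The SDDC Manager side needs no extra handling, since the postgres-db-dump file already comes inside the log bundle collected with the sos utility.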
Excellent info post. I have been wondering about this issue, so thanks for posting. Pretty cool post, it's really very nice and useful. However, can you assist me with the FMWhatsApp app - can I use this method with that app?
Otherwise, users are being provisioned flawlessly; just the groups won't come over.
Thanks so much @StevenDSa for helping. Below are my checks:
Make sure there is no directory configured at the customer level OG in UEM --> No directory is configured in UEM.
Make sure customer level OG is specified in WS1 Access --> Yes.
Assuming the same group does not already exist in UEM? --> Yep, I ensured no WS1 Authorization group existed in UEM beforehand.
Make sure Chrome autofill did not change the username/password for the Provisioning tab --> I disabled the Chrome autofill.
Make sure the Admin Password is typed in correctly --> Test connection is successful.
Wait for it to fail (might take some time). Hopefully the error can point us in the right direction or give us the ability to deprovision ---> Waited two days now.
Further troubleshooting steps --> Check-marked the WS1 Authorization group --> Deprovision --> Waited for it to be gone under Group Provisioning --> Re-added the same group for Provisioning --> It kicked into "Ready for Provision" --> then went into "Provisioning" --> stuck there and won't fail at all, no matter how long I wait.
I then proceeded to create another separate blank group (containing no users) in Workspace ONE Access --> Accessed the AirWatch Provisioning app --> Group Provisioning --> Added that blank group --> still stuck at Provisioning and won't fail.
Maybe contact support? Could something be wrong with my VIDM tenant?
Andre
I've not seen that one before. Here are some suggestions:
Make sure there is no directory configured at the customer level OG in UEM
Make sure customer level OG is specified in WS1 Access.
Assuming the same group does not already exist in UEM?
Make sure Chrome autofill did not change the username/password for Provisioning tab
Wait for it to fail (might take some time). Hopefully the error can point us in the right direction or give us the ability to deprovision.
Hello StevenDSa, I am configuring Okta to SCIM-provision users & groups to WS1 Access, and then WS1 Access to provision the same sets of users and groups to UEM. The first part, from Okta to WS1 Access, has no issue. From WS1 Access to UEM, the AirWatch Provisioning app can only provision users to UEM. Group Provisioning just gets stuck at "Provisioning" and never actually provisions the groups to UEM. Any chance you know what could cause group provisioning to get stuck like this? Thank you
That did it!  Thanks!
Can you change your UEM settings from Redirect to Post?
Using this link I am forwarded to Okta for SSO, then I can see the trace to WS1 Access, and then from Access to UEM, but I am still getting "Login Failed, please try again." At this point I would guess it's an issue with the username values or something similar. How would I be able to confirm this?
I did run a SAML trace on this login event and can see UEM sending to Access, then Access to Okta, Okta replies to Access, and then Access forwards back to UEM, where we still get the same error "Invalid User Credentials ?? An unexpected error occurred."
I checked the Applications for UEM and Access and cannot find any mismatch in the passed values. Am I overlooking something?
Here is the UEM Console SAML Settings.
Here are the 3 Applications in Access (the first two were created and provisioned by the UEM client when setting up). Each of them is configured as below:
The provisioning is all working as expected and configured similarly.
I highlighted in red the ones I thought could be the issue, but how would I go about finding out what other possible values these could have?
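One way I can think of to double-check the values is to take the base64 SAMLResponse out of the SAML trace (with the POST binding it is plain base64) and decode it, then compare the NameID and attribute values against the UEM username, something like:
base64 -d saml_response.b64 > saml_response.xml
xmllint --xpath '//*[local-name()="NameID"]' saml_response.xml
xmllint --xpath '//*[local-name()="Attribute"]' saml_response.xml
(saml_response.b64 is just a file I pasted the SAMLResponse value into, and xmllint comes with libxml2; the file names here are only examples.)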
I don't know if I'll have the right answers for you, but based on your comment "From the web using cnXXXX.awmdm.com we are not redirected to Okta and get 'Invalid credentials, Try again.'":
If you go to the following in your browser: https://cnXXXX.awmdm.com/mydevice/Login?gid=OG
You should be redirected to WS1 Access and then redirected to Okta. If this is not happening, then you most likely have an issue in Enterprise Integration -> Directory Services (specifically under SAML Settings).
If you are having a problem on the return, it's probably an issue where the values being passed in the response do not match the UEM user.
For reference, as I forgot this in the first post - we are on UEM 2007.
For Hub Services, I access that normally through the UEM apps menu (9 dots at the top right), and then I am forwarded to our DOMAIN.workspaceoneaccess.com/catalog-portal/admin-console#/uem
If that is the right area you are speaking of, then it is configured as follows and does not seem to have a way to change or update the settings. Under Workspace ONE Hub Services - System Settings we don't seem to have any way to change it. Currently this has only two things:
System Settings
Mobile Flows URL: Our URL
UEM Integration API URL: Our asXXXX.awmdm.com
API Certificate: Our Cert
Certificate Password: Our Password
Admin API Key: Our API Key
Group ID: Our top OG
Device Services URL: Our dsXXXX.awmdm.com/DeviceServices
We also have the AirWatch app and the AirWatch Provisioning applications both configured in Access, with the same groups assigned to them. In UEM we are configured to use SAML, and that is pointing to Access.
For our second issue, is all that is needed here to switch the Okta group type and then re-sync those groups? I will have to check with our Okta admin to be sure, but I would bet we are using the assignment groups.