
Virtual Pharaohs


Hi all ..

Wish you a happy new year ..

I want to inform all of you that my blog will be moving to its own domain and website, starting next Saturday, 10th Jan. 2015.
The website is already up, but there are no posts there yet. I'll move all posts from here to the new website, as well as modify the RSS feeds of the Planet V12N blog to point to the new website.
I'll not close this blog now; I'll leave it until I finish moving all articles, updating them where needed.


I'm so happy and sad at the same time: happy for having my own website and domain, and sad for leaving this blog after some months here.
Special thanks to VMware for hosting my blog for these months, and I hope I'll give all of you useful content on the new website:

Thanks all


Share the news ...

Hi All ..


Let's now talk about Microsoft SharePoint 2013. MS SharePoint is one of the most complex Microsoft products: it's a multi-tier product in which each tier can be scaled individually. Quoting the following definition of SharePoint:

"SharePoint can provide intranet portals, document and file management, collaboration, social networks, extranets, websites, enterprise search, and business intelligence. It also has system integration, process integration, and workflow automation capabilities."

MS SharePoint consists mainly of three tiers: Web Interface, Application Server and Database. Each of these tiers needs a defined level of performance, availability and scalability. vSphere 5.x can easily provide that level thanks to its flexibility, its ability to host different types of workloads, and the advanced availability and scalability features it offers, like vSphere vMotion, vSphere HA and vSphere DRS.

For more information, visit the new URL: Virtualizing Microsoft SharePoint 2010/2013 on vSphere 5 Best Practices.

Hi all ...


In our next post, we will talk about Oracle Databases as our next candidate for virtualization. Oracle DBs are known to be heavy-duty Tier 0 or Tier 1 servers whose loss can significantly disrupt the business.
Oracle is a strategic partner of VMware. This provides great support and compatibility for Oracle DBs running above or below the vSphere platform. As we'll see in the following sections, Oracle DB is supported both as a virtual application on top of the vSphere platform and as a store for vSphere data, used by vCenter Server for example.


As with Microsoft SQL Server, Oracle applications and databases have their own features that align with vSphere features to push Oracle Database performance, availability and scalability to another level. The Oracle Real Application Clusters (RAC) feature is fully supported on vSphere 5.x and can be used with vSphere HA to push database availability towards the magic five 9's.
For more info, check the new URL: Virtualizing Oracle Databases on vSphere 5 Best Practices

Hi All ...


Today, we will talk about Microsoft SQL Server virtualization. SQL databases have been in our data centers since the 1990s, carrying TBs of data stored and retrieved by countless applications. Nowadays, hardly any business application can run without a SQL back-end DB. This has led to a "spree" of databases in our data centers, making Microsoft SQL Server one of our most critical applications, spanning both production and non-production regions. The latest editions are Microsoft SQL Server 2012 and 2014.


We'll talk about how to virtualize Microsoft SQL Server on the vSphere 5.x platform and how to leverage new SQL Server features, like AlwaysOn Availability Groups (AAG) and Database Mirroring, to provide the required level of performance and availability. For more information, check the new URL: Virtualizing Microsoft SQL Server 2012/2014 on vSphere 5 Best Practices.

Hi all ...


During my journey towards the VCAP-DCD certification, I found a nice topic on the exam blueprint: "Gathering and Analyzing Business Application Requirements". When I began to examine it, I found that it covered not only what its headline states, but also the best practices for deploying some Business Critical Applications (BCAs) in your vSphere environment. It included all of Microsoft Exchange, SQL and SharePoint, Enterprise Java Applications, SAP HANA & Oracle. These applications are considered (in most environments, if not all) Tier 1 applications that require wide eyes and careful attention when dealing with them, especially when migrating them to the virtual world on a vSphere infrastructure.


I tried to summarize all I could find during my readings on this topic, as I know it's a critical one, and mastering it requires some deep knowledge of vSphere capabilities and how to leverage them to serve these applications. In addition, this topic is a point on the VCAP-DCD exam blueprint, and one of its trickiest points, if not the trickiest of all.


First, let's define what a Business Critical Application (BCA) is:
"A Business Critical Application is one without which the business either stops or suffers great losses in revenue. Losing that application is critical, and the business always requires the highest levels of performance, availability and recoverability (in case of a disaster) for it."


Now, someone will ask why we should take the difficult road of virtualizing BCAs as long as they're running physically without any problems. The answer is plain and simple: better availability, the same performance (and maybe better when scaling out), easier recovery, and all of it at lower cost. The vSphere platform is capable of delivering the performance these applications require. In addition, VMware has its own HA capabilities that can be used alone or with other clustering solutions for the highest levels of availability. HA isn't the only clustering feature available; VMware offers another: DRS, which helps to load-balance and distribute the load across many ESXi hosts to maintain the required performance for BCAs while not affecting lower-tier applications. Last but not least, VMware offers its own DR solution, Site Recovery Manager (SRM), which automates the DR process as well as allowing the responsible personnel to test their DR plan whenever they want.


After defining these two points, we will now discuss the best practices for deploying Business Critical Applications in your vSphere environment, covering all of:
Microsoft AD DS, Microsoft Cluster Services, Microsoft Exchange, Microsoft SQL, Microsoft SharePoint, Oracle DB, SAP HANA & Enterprise Java Applications

I tried as much as possible to relate everything to the main design qualifiers (Availability, Manageability, Performance, Recoverability and Security - AMPRS). I also added another aspect, Scalability, as I felt it is important to consider when designing for such applications. Where applicable, Cost is also weighed against all of these qualifiers.

Now, let's start:

1- Virtualizing Microsoft Active Directory Domain Services (AD DS)-Windows 2012 on vSphere Best Practices.

2- Virtualizing Microsoft Clustering Services (MSCS)-Windows 2012 on vSphere Best Practices.

3- Virtualizing Microsoft Exchange Best Practices.

4- Virtualizing Microsoft SQL 2012/2014 Best Practices.

5- Virtualizing Microsoft SharePoint 2013 Best Practices.

6- Virtualizing Oracle DB Best Practices.

7- Virtualizing SAP HANA Best Practices.

8- Virtualizing Enterprise Java Applications Best Practices.


Share the knowledge ...


Update Log:

-- 25/11/2014: Added Virtualizing MSCS Best Practices Part hyperlink.

-- 07/12/2014: Replaced the old introduction to update the URL itself and update my status as VCAP-DCD certified.

-- 14/12/2014: Added Virtualizing Microsoft SQL 2012/2014 Best Practices Part hyperlink.

-- 16/12/2014: Added Virtualizing Oracle Databases Best Practices Part hyperlink.

-- 18/12/2014: Added Virtualizing Microsoft SharePoint 2013 Best Practices Part hyperlink.

Hi All ...


Microsoft Clustering Services (MSCS) is one of the first HA solutions in our IT world and one of the hardest to configure. Although I don't have personal experience with MS Failover Clustering, I know the severe pain of deploying, testing and troubleshooting this solution. Microsoft has developed it a great deal since its first version. The versions available now are MS Clustering Service on Windows 2008 R2, Windows 2012 and Windows 2012 R2. With vSphere 5.x, MSCS can now be virtualized, and it's fully supported by Microsoft.

For more information, visit the new URL: Virtualizing Microsoft Clustering Services Windows 2012 on vSphere 5 Best Practices on the VirtualPharaohs blog.

Active Directory Domain Services (AD DS) is the core of our IT infrastructure nowadays: the authentication and authorization center of any IT environment. Having been around since the 1990s, AD DS has gone through great development to reach its current version on Windows 2012, with many new features. Luckily, some of these exist specifically to help virtualize Domain Controllers (DCs) with minimal effort and to leverage all the advantages and features of virtualization.

For more information, visit the new URL: Virtualizing Microsoft Active Directory Domain Services (AD DS)-Windows 2012 on vSphere 5 Best Practices

Hi All


During some readings, I remembered the eternally debated question that comes up when creating an SMP VM (a VM with many vCPUs):

"Which is better: many cores in a single socket, or many sockets each with a single core..??"

I remember how many times I debated this for hours with my reporting manager (while reviewing some designs), but neither of us could prove a thing.

To answer this question, we have to review some concepts.


NUMA CPU Configuration: Non-Uniform Memory Access (NUMA) is a CPU configuration in which each CPU has some memory DIMMs local and connected to it. Each CPU can access its local memory DIMMs with the lowest latency, and the remote DIMMs with higher latency over an interconnect bus.

Many vExperts, and VMware itself, talked about this when it first became supported in vSphere 3.5 (if I remember correctly!!) and how the vSphere platform uses the NUMA configuration to support SMP VMs. The best articles you can read about it and how it is supported are Frank Denneman's:

Sizing VMs and NUMA nodes -

ESXi 4.1 NUMA Scheduling -

Node Interleaving: Enable or Disable? -


vNUMA CPU Configuration: vNUMA is virtual NUMA. On a NUMA vSphere host, when hosting a large VM (9+ vCPUs), the host can expose the NUMA configuration to the VM to gain an additional performance boost. It was introduced with vSphere 5.0; to gain this boost, the VM hardware version must be 8 or later, and the guest OS and guest applications must be NUMA-aware.


Now let's quote the phrase from the vSphere 5.5 ePubs that gave me the key to solving the mystery:

You can affect the virtual NUMA topology with two settings in the vSphere Web Client: number of virtual sockets and number of cores per socket for a virtual machine. If the number of cores per socket (cpuid.coresPerSocket) is greater than one, and the number of virtual cores in the virtual machine is greater than 8, the virtual NUMA node size matches the virtual socket size. If the number of cores per socket is less than or equal to one, virtual NUMA nodes are created to match the topology of the first physical host where the virtual machine is powered on.

Looking for those advanced settings, I found these:

cpuid.coresPerSocket & numa.vcpu.maxPerVirtualNode.

Referring to them here:

cpuid.coresPerSocket: Determines the number of virtual cores per virtual CPU socket. If the value is greater than 1, it also determines the size of virtual NUMA nodes if a virtual machine has a virtual NUMA topology. You can set this option if you know the exact virtual NUMA topology for each physical host. (Default: 1)

numa.vcpu.maxPerVirtualNode: If cpuid.coresPerSocket is too restrictive as a power of two, you can set numa.vcpu.maxPerVirtualNode directly. In this case, do not set cpuid.coresPerSocket. (Default: 8)

So, the first one is the number of cores per single vCPU socket, and the other one controls some behavior (not clear, huh..??!)


Googling and searching these two options led me to a VMware blog post (by Mark Achtemichuk) about how the number of cores per socket affects performance, and it solved everything:

Does corespersocket Affect Performance? | VMware vSphere Blog - VMware Blogs


According to the article, controlling the number of cores per socket in VMs was introduced by VMware purely for licensing reasons before vSphere 5.0. When vNUMA was introduced in vSphere 5.0, another player entered the field.

#1 When creating a virtual machine, by default, vSphere will create as many virtual sockets as you’ve requested vCPUs and the cores per socket is equal to one. I think of this configuration as “wide” and “flat.” This will enable vNUMA to select and present the best virtual NUMA topology to the guest operating system, which will be optimal on the underlying physical topology.

#2 When you must change the cores per socket though, commonly due to licensing constraints, ensure you mirror physical server’s NUMA topology. This is because when a virtual machine is no longer configured by default as “wide” and “flat,” vNUMA will not automatically pick the best NUMA configuration based on the physical server, but will instead honor your configuration – right or wrong – potentially leading to a topology mismatch that does affect performance.

Now let's put all the pieces together:

1-) When creating any VM: by default, vSphere sets the number of cores per vCPU socket to 1 (cpuid.coresPerSocket=1 by default). In this case, when the VM has more than 8 vCPUs (8+ virtual sockets, each with a single virtual core), a virtual NUMA topology is created by default for this VM that complies with the underlying physical NUMA topology, i.e. the vCPU configuration is modified automatically to match the physical NUMA layout. The maximum number of cores in a single virtual NUMA node (created by automatic virtual NUMA, and equal to the number of physical cores per NUMA node) is controlled by "numa.vcpu.maxPerVirtualNode", which is set to 8 by default (it has to be changed if the physical NUMA node has more than 8 physical cores).

2-) To enable virtual NUMA manually on a large VM: you can manually set the number of cores per virtual socket (cpuid.coresPerSocket>1). In this case, the virtual NUMA configuration will be set exactly as configured, ignoring the underlying physical NUMA configuration. "numa.vcpu.maxPerVirtualNode" has no effect in this case (check the last test in the VMware blog post, where virtual NUMA was set to a single socket with 24 cores).

3-) To enable virtual NUMA manually on a small VM with a certain virtual NUMA size: you have to set "numa.vcpu.min" to less than 9 first. Then, you can either set "numa.vcpu.maxPerVirtualNode" to the required number while leaving "cpuid.coresPerSocket" at 1, or set the number of virtual sockets and virtual cores manually to build the required virtual NUMA topology (which has to comply at least with the number of nodes in the underlying physical NUMA topology).
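The three cases above can be sketched as a small decision function. This is a simplified model with hypothetical parameter names (only the advanced-setting names in comments come from the text; the real logic lives inside the ESXi scheduler):

```python
# Simplified model of how ESXi 5.x picks the virtual NUMA node size.
# Parameter names are illustrative; the commented keys are the real
# advanced settings discussed above.

def vnuma_node_size(total_vcpus,
                    cores_per_socket=1,        # cpuid.coresPerSocket
                    max_per_virtual_node=8,    # numa.vcpu.maxPerVirtualNode
                    numa_vcpu_min=9,           # numa.vcpu.min
                    phys_cores_per_node=8):
    """Return the vCPUs per virtual NUMA node, or None if no
    virtual NUMA topology is exposed to the guest."""
    if total_vcpus < numa_vcpu_min:
        # Case 3: small VM - no vNUMA unless numa.vcpu.min is lowered.
        return None
    if cores_per_socket > 1:
        # Case 2: manual topology - node size follows the socket size,
        # ignoring the physical layout (right or wrong).
        return cores_per_socket
    # Case 1: default "wide and flat" - match the physical NUMA node,
    # capped by numa.vcpu.maxPerVirtualNode.
    return min(phys_cores_per_node, max_per_virtual_node)

# Default 12-vCPU VM on a host with 8 cores per physical NUMA node:
print(vnuma_node_size(12))                       # -> 8
# Same VM forced to 12 cores per socket (licensing-driven layout):
print(vnuma_node_size(12, cores_per_socket=12))  # -> 12
# Small 4-vCPU VM: no vNUMA by default:
print(vnuma_node_size(4))                        # -> None
```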


Now we can understand the importance and necessity of setting the number of virtual sockets and virtual cores correctly in any VM. We can also now answer the eternally debated question:

"Which is better: many cores in a single socket, or many sockets each with a single core..??"

The answer is: it depends. If you understand exactly the underlying physical NUMA topology, you can set many cores per socket in the VM to use the virtual NUMA benefits, or take the easiest approach and leave the number of cores per socket at 1.


Share the knowledge ..

Update Log:

-- 13/10/2014: Added the links for NUMA CPU Configuration Section

Hi All ...


vSphere 5.0 and later came with many enhancements for the virtual infrastructure, and Swap to Host Cache is one of them. This feature allows you to use an SSD datastore (some or all of it) as a write-back cache to which the ESXi host swaps memory pages in case of hard-state memory contention. Many vExperts have written about it and its technical how-to configuration, like Duncan Epping in his complete-guide blog post here.


Now, you're asking: "So, why do you write this blog post?? Do you wanna copy-paste??!" The answer is: NO!!

I'm writing this post to answer a question that may come to mind while reading about this awesome feature: "What is the difference between using an SSD datastore as Host Cache and just configuring the host to put the VMs' .vswp files on that datastore???"


While reading many sources about the Host Cache feature, I didn't find anyone who clearly answered this question, but the following screenshot gave me the first ray of light. It's a screenshot (from Duncan's post) of a comment exchange between him and one of his visitors, who asks nearly the same question.



Matt van Mater (the visitor) compared the two features: Host Cache and dedicating a certain SSD datastore for the VMs' .vswp files. Duncan's answer clearly stated that the space usage would be much lower if you use Host Cache. I began to research write-back cache technologies (it was my first time dealing deeply with cache technologies) and I found a simple diagram on Wikipedia:


This simple diagram also shows how a write-back cache works and why it uses such a small amount of space while giving a fast response. All of this gave me the following answer to the question above, and it was all in the underlined term, Write-back Cache:

1-) Host Cache is a write-back cache, which means it makes both read and write operations fast, as it reads and writes mainly to the SSD drive. That improves reading from swap after the warm-up period. Only some blocks of the swap files are written back to the .vswp files that reside in the VMs' folders.

2-) Host Cache is shared between VMs, as it doesn't create a specific file for each VM like a normal .vswp file. It only creates a bunch of files on the Host Cache that the ESXi host swaps to. Any read/write operation from any VM on the configured host can benefit from the probability of sharing its memory page with another VM (the same concept as Transparent Page Sharing). This greatly reduces the need to access the .vswp file location and improves performance when the block under operation is in the cache; thus, Host Cache needs some warm-up period.

3-) The SSD datastore size for placing the swap files of N VMs = N * the size of a single swap file (assuming equal .vswp file sizes). Using Host Cache, and thanks to memory page sharing, this size is greatly reduced (the same concept as Transparent Page Sharing).
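Point 3 is easy to put into numbers. A back-of-the-envelope sketch (the VM counts and swap-file sizes below are hypothetical examples, not measurements):

```python
# Dedicated SSD swap datastore: must hold one full .vswp file per VM.
# Host Cache: a single shared pool, so it can be sized far smaller.

def swap_datastore_gb(num_vms, vswp_size_gb):
    """Capacity needed when every VM's .vswp lives on the SSD datastore.
    (A .vswp file's size is the VM's configured RAM minus its reservation.)"""
    return num_vms * vswp_size_gb

# 20 VMs, 4 GB swap file each:
print(swap_datastore_gb(20, 4))  # -> 80 (GB of SSD just for swap files)

# With Host Cache, you instead dedicate one fixed chunk of the SSD
# (say 40 GB) and all VMs on the host share it, page by page.
```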

4-) In case of using a network/FC-based SSD datastore for placing swap files, the network latency (even on an FC SAN) is much greater than local SSD access latency; hence, Host Cache (which should only be configured on local SSD disks for the same reason) always gives higher performance.


I hope this clears up the small mystery of the difference between Swap to Host Cache and setting an SSD datastore as the .vswp files' location.
Waiting for your feedback and comments.

Special Thanks to: Duncan Epping - Matt van Mater

Share the knowledge ...

Hi all ..
During my study for the VCAP-DCD, I watched the Business Continuity and Disaster Recovery Design Workshop (BCDR Workshop) by VMware. It's an online course of about 4.5 hours and can be found at this link.

This workshop really builds a base for practicing DR sites and business continuity plans, and it's genuinely helpful. If you're at VCP level or lower, you may find this summary incomplete and need to review the full modules; but if you're at VCAP level or higher, I think it can sum up the modules for you.


This is a summary of the first module of this workshop, containing the most important notes I took while listening to it online. Let's start:


Disaster Definition:

The definition of a disaster isn't the same for all organizations, but in general it means an event that causes major damage to the business or the organization.
It may be classified by its cause (natural/man-made) or by its area of effect (Catastrophe: a wide geographical area; Disaster: a certain building or data center; Service Disruption: failure of a single application or component inside the data center). Any disaster and its effects can be mitigated using the entire DR plan or parts of it.

DR Sites Types:

Dedicated vs. Non-dedicated: A dedicated DR site is a site with idle hardware used only by failed-over systems in case of a disaster, while a non-dedicated DR site (usually a regional campus) hosts another production environment, with some of its capacity reserved for failover in case of a disaster. Dedicated DR sites (and only the dedicated type) can be Hot, Warm or Cold.

Hot vs. Warm vs. Cold: A hot DR site can be failed over to within minutes to hours of a disaster. A warm DR site requires hours to a few days to be ready for failover. A cold DR site requires many days to be ready for failover.

Disaster Recovery Plan (DRP) vs. Business Continuity Plan (BCP):

DRP: A plan containing all the procedures and steps to be carried out during and right after a disaster to fail all systems over to the DR site and bring them back online as fast as possible. It also includes all the procedures to protect personnel and assets during the disaster.

BCP: A plan containing all the procedures required to run the systems and keep them online at the DR site with the maximum capacity available there. For a non-dedicated DR site, the BCP may also include the procedures for running both the recovered original systems and the production systems at the DR site side by side without interference. It also includes all the steps and procedures required to fail the systems back to the original site after recovering from the disaster.

Steps of Creating DRPs & BCPs:

1-) Management Buy-in: Management should agree on the costs required for the DRP & BCP. This includes the software required for replication, the hardware required and any other facilities. All levels of management should participate in developing the DRP & BCP, testing them, and executing them when required.

2-) Performing Business Impact Analysis (BIA): This includes:

a-) Identify Key Assets: Determining the most important items to be protected, like software, user data, blueprints and implementation documents, etc. In addition, it's important to identify the critical business functions, map them to the identified key assets, and map how these critical functions depend on each other and on key assets for continuity of the business.

b-) Define Loss Criteria: Defining the impact of losing any of the business key assets to define the priority of these assets to the business.

c-) Define Maximum Tolerated Downtime (MTD): MTD is the maximum downtime of any key asset after which major damage to the business will occur and business continuity can't be maintained. MTD is defined in the following categories:

i-) Critical: minutes to hours of downtime.

ii-) Urgent: hours to 1 day of downtime.

iii-) Important: within 3 days of downtime.

iv-) Normal: up to 14 days of downtime.

v-) Non-important: up to 30 days or more of downtime.
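The MTD categories above can be expressed as a simple lookup. The thresholds come from the list; the function itself is just an illustration:

```python
def mtd_category(downtime_days):
    """Map a tolerated downtime (in days) to its MTD category."""
    if downtime_days < 1:
        return "Critical"       # minutes to hours
    if downtime_days <= 1:
        return "Urgent"         # hours to 1 day
    if downtime_days <= 3:
        return "Important"      # within 3 days
    if downtime_days <= 14:
        return "Normal"         # up to 14 days
    return "Non-important"      # up to 30 days or more

print(mtd_category(0.2))   # -> Critical
print(mtd_category(2))     # -> Important
print(mtd_category(21))    # -> Non-important
```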

3-) Define RPO (Recovery Point Objective): The RPO indicates how much data loss the business can tolerate, measured in time. For example, an RPO of 1 hour means that data must be restorable to its state 1 hour before the disaster. Data not covered by the RPO is lost forever. In the previous example, the data from the last hour before the disaster is lost forever, and the business can tolerate that.

4-) Define RTO (Recovery Time Objective): The RTO indicates how much downtime the business can tolerate without major damage.
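As a concrete example of checking an RPO against a replication schedule (the intervals and RPO values below are hypothetical; plug in your own):

```python
def meets_rpo(replication_interval_hr, rpo_hr):
    """Worst-case data loss equals the replication interval:
    a disaster just before the next replication run loses everything
    written since the last completed one."""
    worst_case_loss_hr = replication_interval_hr
    return worst_case_loss_hr <= rpo_hr

print(meets_rpo(0.25, 1))  # replicate every 15 min, RPO 1 hr -> True
print(meets_rpo(4, 1))     # replicate every 4 hrs, RPO 1 hr -> False
```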

5-) Perform Risk Assessment: Identifying all the risks surrounding the business, their probability and their possible impact, in order to avoid them. The risk assessment should include all natural and man-made risks.

6-) Examine Regulatory Compliance: Always check for any legal requirements that DRP and BCP should fulfill.

7-) Develop DRP: By completing all the previous steps, all the required analysis is done and the DRP can be developed correctly. The DRP should contain all the pre-defined RPOs/RTOs, the key assets to protect, and the procedures to bring all critical systems back online.

8-) Design DR Systems: This includes choosing the DR site and whether it will be Dedicated/Non-dedicated and Hot/Warm/Cold. It also includes designing the storage replication system with planned backups, and the network required for replication and operation of the DR systems in case of a disaster, as well as all the hardware/software required for failover of the main site.

9-) Create Run-books: A run-book is a document containing all the steps and procedures required to fail a system over to the DR site in case of a disaster. It includes a step-by-step guide for rebuilding the system from scratch, reloading the critical applications and user data, and bringing the applications back into operation so users can continue to work. Each DR site should have its own run-books, one per key asset, to be used in a specific order based on the DRP and the RPO & RTO of each key asset. Any run-book should take into consideration the differences in configuration and facilities between the main site and the DR site. Run-books are hard to maintain, as the systems and applications in use change quickly, as do their dependencies, which affects their restore techniques and restore order.

10-) Develop BCP: The BCP should contain all the steps required to maintain the daily operations of systems and applications at the DR site, such as daily backups. It should also include detailed solutions for all expected problems (resulting from the lack of some resources and facilities at the DR site), as well as the detailed procedures to fail all systems and applications back to the main site after recovery from the disaster.

11-) Test DRP and BCP: The DRP & BCP should be tested frequently to reveal any problems with them. Testing must be done carefully in order not to disrupt production systems, especially in the case of a non-dedicated DR site.


Share the Knowledge ...


Update Log:

-- 08/11/2014: Updated Dedicated vs. Non-dedicated DR Sites Comparison.

Hi all

A few days ago, I finished a project at one of my customers, a medium-sized one. This project was about migrating the virtual infrastructure at his Manufacturing Plant site from vSphere 4.0 (BN 721907) to vSphere 5.x.

Fortunately, I came up with a plan to migrate all of his infrastructure online (although he was planning for downtime) and it went successfully. So, why not share it with you all ..?!


The infrastructure consisted of 3 HP hosts with an aggregate of 32 GB of RAM and 6 CPU sockets with 4 cores each. In addition, a 2 TB FC SAN was used as the back-end storage. A single cluster with HA/DRS enabled hosted Tier 2 applications as well as a single critical Document Server.


Customer requirements:

  • Migrate all of the virtual infrastructure from vSphere 4.0 (BN 721907) to vSphere 5.x.
  • Migration to be done on weekends (he assumed the migration would be offline).
  • Remove the old vCenter Server and build a fresh new vCenter 5.x Server Windows VM.
  • The Document Server must be back online ASAP, even at the cost of all other VMs being offline.

My Initial Plan and Decisions:

Although the environment seemed really small, this customer was really worried. That Document Server was critical for him, and he had had a bad experience with a similar migration process done at HQ.

I decided to plan to do it online, during working hours, for a couple of reasons:

  • The environment was really small; there was no need to shut down all services unnecessarily.
  • The customer had an Enterprise license, which gave me many features to use, and why not use them..??
  • I was worried about shutting down that Document Server and powering it back up (it was a legacy document server).
  • No advanced network configuration was used, i.e. simple Standard vSwitches without any special configuration.
  • No worries about upgrading the VMFS datastores from VMFS 3 to VMFS 5 online. Any drawback of an upgraded VMFS datastore (regarding preserving the old block size) would not affect future business.

I began to plan the migration and work it out conceptually on paper. One thing that made it a little easier was that he would be retiring his old vCenter, which was a physical machine, so no P2V phase was needed. Another was the simple network configuration mentioned before. I decided that the plan would be something like:

  • Upgrade one host and build the new vCenter on it to create a new DRS cluster.
  • Leverage vMotion and DRS to migrate all VMs from the old hosts to the new one, then upgrade the old hosts one by one.
  • Upgrade the datastores to VMFS 5 online.
  • Upgrade VM hardware levels and VMware Tools during the regular maintenance windows.
  • After the migration is finished, decommission the old vCenter Server.

I chose to upgrade to vSphere 5.1 U2, as I'm really familiar with it and the customer didn't need any vSphere 5.5 features.

Plan Phases:

Phase 1: Preparation Phase:

In this phase, all the software needed is downloaded and moved to the remote plant. All VMs are backed up as a safeguard against any unknown sudden failures or circumstances.

Phase 2: vSphere 5.1 Host Deployment:

For simplicity, let's name the hosts: Host 1,2 and 3.

1-) On the old vSphere 4 cluster, I reviewed and recorded any network or storage configuration needed before beginning; there was nothing special.

2-) I started by choosing Host 1 and putting it into Maintenance Mode, so all its VMs were safely migrated by DRS to the other hosts.

3-) After evacuating the host, I removed Host 1 from the vCenter 4 cluster and rebooted it into the vSphere 5.1 U2 installation.

4-) After installing vSphere 5.1 U2 on it, I re-configured Host 1 for networking and storage. Now, Host 1 had portgroups matching the old Hosts 2 and 3, was attached to the same datastores, and had vMotion enabled.

5-) I created a new VM with hardware version 9 for the new vCenter Server. On this VM, I installed Windows 2k8 R2, updated it, and installed the antivirus agent per the customer's policy.

6-) On that VM, I installed vCenter Server 5.1 U2 with the embedded DB, as there was no need for expansion beyond its limits.

7-) After a successful installation, I created a new cluster with HA/DRS enabled.

8-) I re-added all hosts (1, 2, 3) online to the new cluster (it threw a false warning about removing the hosts from the old vCenter, but nothing to worry about).

9-) I re-balanced the cluster using DRS to make sure that every VM was still working perfectly and that VM migrations were smooth between these different hosts. Fortunately, everything was just fine !!

Phase 3: Upgrading the Remaining Hosts:

1-) I put Host 2 into Maintenance Mode, so all its VMs were safely migrated by DRS to the other hosts.

2-) After evacuating the host, I removed Host 2 from the vCenter 5.1 cluster and rebooted it into the vSphere 5.1 U2 installation.

3-) After installing vSphere 5.1 U2 on it, I re-configured Host 2 for networking and storage. Now, Host 2 had portgroups matching the other hosts, was attached to the same datastores, and had vMotion enabled.

4-) I re-added Host 2 to vCenter 5.1 and the new cluster.

5-) I re-balanced the cluster using DRS to make sure that every VM was still working perfectly and that VM migrations were smooth.

6-) I repeated steps 1-5 for Host 3.

7-) I made sure the whole virtual infrastructure was properly licensed with the new licenses.

Phase 4: Storage Upgrade:

I upgraded the two datastores online from VMFS-3 to VMFS-5 with just one click, and made sure that everything was still working fine.
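For reference, the same in-place upgrade is also exposed through the CLI in 5.x; a sketch (the datastore name is a placeholder):

```shell
# Upgrade a mounted VMFS-3 datastore to VMFS-5 in place; VMs can keep
# running, matching the one-click online upgrade in the vSphere Client
esxcli storage vmfs upgrade -l Datastore01
```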

Now the only thing remaining was to upgrade VMware Tools and the VM hardware version of the VMs. The customer stated he'd do that gradually during his maintenance windows.

It took me only 2 days to finish the project, and the customer was over the moon.

Subsidiary Notes:

1-) In case of using Distributed Switches version 4: I tested this scenario in my home lab and found that the best approach is to create temporary standard vSwitches, move the VMs' networking to them to make vMotion easy, and then migrate all VM networking to a new Distributed Switch 5.1. The reason: vMotion can only be performed when the VMkernel ports of the source and destination hosts are on the same LAN segment. VMkernel portgroup names don't affect vMotion (differently named VMkernel portgroups don't prevent it), but vMotion does require the VM portgroups to be identically named on the source and destination hosts. Hence a Distributed Switch can't be used in this migration plan: you can't connect a host to two Distributed Switches and then create two portgroups with the same name on those switches on the same host. For more information, read the following article by Chris Wahl:

This may introduce some limited downtime.
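If you take the temporary-standard-vSwitch route, the temp switch and its VM portgroup can be created per host from the ESXi Shell; a sketch with placeholder names of my choosing (vSwitchTemp, VM-Temp, vmnic1):

```shell
# Create the temporary standard vSwitch and give it an uplink
esxcli network vswitch standard add --vswitch-name=vSwitchTemp
esxcli network vswitch standard uplink add --uplink-name=vmnic1 --vswitch-name=vSwitchTemp

# Create the VM portgroup with the SAME name on every host involved:
# vMotion requires identically named VM portgroups on source and destination
esxcli network vswitch standard portgroup add --portgroup-name=VM-Temp --vswitch-name=vSwitchTemp
```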

2-) In case you need to know the difference between upgraded VMFS 5 datastores and newly-created ones, refer to the following article by Jason Boche:

If you want to re-create your VMFS datastores as native VMFS-5, you need at least two shared datastores and an Enterprise license (for Storage vMotion). Use Storage vMotion to move the VMs from one datastore to the other, then re-format the now-empty datastore as VMFS-5 using vCenter 5.1 or an ESXi 5.1 host. Repeat for the other datastores until you finish. Also keep in mind the SCSI reservation issue: if VAAI isn't available, the number of VMs on a single datastore shouldn't be high enough to cause SCSI reservation conflicts (usually 10-15 VMs per datastore).

Waiting for your feedback .

Share the Knowledge .

Hi All .

Today, I finally got time to start digging into VMware Horizon View 6.0, released last June. I began with its Release Notes, which can be found here.

I'm writing this post to summarize both the pros and cons of this new release. I'll keep posting about it as I finish testing it.


Pros:
  • One of the most important: Windows Server 2012 is now supported for many features in View 6.0. Windows 2008 R2 and 2012 R2 are now supported as OSs for Connection, Security and Composer servers (Windows 2012 R2 Std. only). RDS servers managed by View 6.0 are supported up to Windows 2012 R2. For single-user desktops (normal virtual desktops), only Windows 2008 R2 SP1 is supported as a server OS. Also, AD Domain Services is supported at any domain functional level up to Windows 2012 R2.
  • Another important one: Windows 8 & 8.1 are now supported. I still remember losing a customer who wanted a VDI solution supporting Windows 8.1, and I had to say "Sorry, I can't". Windows 8 & 8.1 can be used as OSs for virtual desktops, and the Virtual Disk Reclamation feature is supported on them, but only if vSphere 5.5 is used as the infrastructure.
  • Persona Management can now be used on Windows 8.1 or 2008 R2 SP1 desktops. For standalone desktops, Persona Management is available as a .exe installer.
  • The Remote Experience Agent -which includes HTML Access, Unity Touch, Real-Time Audio-Video, and Windows 7 Multimedia Redirection- is now integrated into the View 6.0 Agent. No need to install two components as in View 5.3.
  • Deeper integration with VMware vSAN for further performance improvements in View 6.0; vSphere 5.5 U1 is required.
  • More integration with cloud architecture (Cloud Pod Architecture), supporting up to 20,000 virtual desktops across 4 pods.
  • View 6.0 is supported on a wide range of vSphere and vCenter editions, as shown below:

[Image: vSphere/vCenter compatibility matrix for View 6.0]

Cons:
  • For Single-user desktop (normal virtual desktop), Windows 2012 and 2012 R2 aren't supported.
  • Although Windows 8 & 8.1 are fully supported in this version, many issues still occur when using View 6.0 with them. Check the Known Issues section.
  • The Local Mode feature is deprecated in this version, and I don't know why. VMware's stated reason is that there's no need for Local Mode when VMware Fusion and Player Plus, which integrate with VMware Horizon Mirage, provide the same check-out/check-in features and let users work offline on a local virtual desktop. I haven't tested Horizon Mirage, so I can't confirm or deny that, but I tested Horizon View Local Mode and it was awesome; it really facilitated operations for remote users with their virtual desktops. VMware also stated that customers on View 5.3 who use Local Mode will still get technical support until at least 2017 (a good gesture from VMware).
  • Many issues with no workaround. Check the Known Issues section.


I hope you find this summary useful. I'm testing it in my lab now and will keep you updated with any news.

Share the knowledge ...



Posted by ShadyMalatawey Jun 19, 2014

Hi All ...
This is the last part of the series. It's just some subsidiary notes that might be helpful.


  • jmattson (Don’t know the actual name)
  • William Lam

Now, let's start..

1. Building Nested Virtualization Environment in Testing Labs:

   Nested virtualization environments are commonly used in testing labs to practice new features and products. It's highly recommended not to use this technique in production unless urgently needed. The following guide by jmattson explains how to run a nested virtualization environment combining different virtualization platform layers:



2. Faking a SSD Device on ESXi 5.x:

Some vSphere 5.x features -like Host Cache- have to run on an SSD device. The following nice trick by William Lam makes an ESXi host detect a normal storage device (local disk or LUN) as SSD storage. Check the following article:

Keep in mind that the previous trick shouldn't be used in production environments, only in testing labs.



3. Linux Files & Folders Permissions:

In Linux-based OSs, every single file or folder has a permission set like:


Permission Set



d --- --- ---
- --- --- ---

d: Directory / -: File.

1st triplet: Read Write Exec. / 4 2 1 -> User (owner).

2nd triplet: Read Write Exec. / 4 2 1 -> Group.

3rd triplet: Read Write Exec. / 4 2 1 -> Everyone.

To change any permission set, use:

   chmod Sum_of_Numbers_for_Permission_Set Folder_or_File_Path/Folder_or_File_Name

   Ex: chmod 664 ‘linux.xml’ -> linux.xml will have R/W permissions for the current User and the User's Group, and Everyone else will have only Read permissions.
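To sanity-check the octal arithmetic yourself (plain shell; stat -c assumes GNU coreutils):

```shell
# rw- rw- r--  =  (4+2) (4+2) (4)  =  664
f=$(mktemp)
chmod 664 "$f"
stat -c '%a' "$f"    # prints 664
rm -f "$f"
```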



4. VMware PowerCLI 5 Cmdlets Concepts:

   Each Cmdlet uses consistent naming format of (verb-noun). Verb for doing an action and Noun of the object will be operated on.

   Each Cmdlet uses parameters and arguments. Parameter is to control the behavior of the Cmdlet and argument is the data value itself consumed by Cmdlet.

   Naming format is (verb-noun -parameter1 argument1 -parameter2 argument2 -argument3…).

   The following table is summarizing general categories of Cmdlets:

Cmdlets Group -> Usage
Get- cmdlets -> Collect information about an object.
Move- cmdlets -> Move objects between containers.
New-\Remove- cmdlets -> Create\remove objects.
Set- cmdlets -> Set certain options on an object.
Start-\Stop-\Suspend- cmdlets -> Start\stop\suspend actions.

  Get-Help CMDLET_to_Know -full: used to get the full help for a certain cmdlet.

  Cmdlet1 | Cmdlet2 | Cmdlet3 | ..: used for pipelining, i.e. the output of Cmdlet1 is used as the input of Cmdlet2, whose output is then the input of Cmdlet3, and so on.


Share the knowledge...

Previous: vSphere 5.x Notes & Tips - Part XXIV:

Hi All ...
In the last part of our series, we'll go through vSphere CLI: a powerful tool that enables many advanced configurations that can't be done with the regular vSphere Client or the vSphere Web Client.

vSphere CLI comes at two levels: the ESXi Shell (over SSH), or vSphere CLI (vCLI), which is installed separately. The following notes cover both levels. For more information, visit the following VMware web portal:


  • Paul Grevink

Now, Let's Start...

1. ESXCLI Full Tree:

This blog post by Paul Grevink is a nice starter guide to the ESXCLI command tree, which is available through the ESXi Shell, SSH or vCLI. It introduces the esxcli command and summarizes all the subsidiary trees in a nice diagram:
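esxcli is also self-describing, so you can dump the full tree straight from a host:

```shell
# List every esxcli namespace and command available on this host
esxcli esxcli command list

# Calling a namespace without arguments prints its own sub-tree, e.g.:
esxcli storage
```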



2. Some Scattered Commands:

The following table explains and summarizes a few dozen commands used through the CLI (items in BLUE are user input):


Command / Option -> General Description




esxcfg-mpath (vicfg-mpath in vCLI): Managing multi-pathing options on an ESXi host.

-b -d Device_ID

Listing all available paths to the device Device_ID.


esxcfg-vmknic (vicfg-vmknic in vCLI): Managing VMkernel ports of an ESXi host.

-a -i IPv4_Address -n Netmask -p Portgroup_Name

Adding a new VMkernel port with IPv4_Address and Netmask, inside a vSS in the Portgroup_Name portgroup.

-d -s vDS_Name -v Port_ID

Deleting the VMkernel port Port_ID from the vDS vDS_Name.


-l

Listing all VMkernel ports on this ESXi host.


esxcli: Managing the ESXi host from many different aspects.

storage core adapter list

Listing all storage adapters on this host.

storage core adapter rescan --all

Rescanning on all storage adapters on this host for any new added storage.

storage core device list

Listing all available devices connected to that host.

storage core device vaai status get

Listing the VAAI status of all storage devices.

storage core claimrule list

Listing all Claim Rules available on that host.

storage core claimrule add –r Rule_ID –t Type….

Adding a new Claim Rule with Rule_ID and Type and additional parameters (requires [storage core claimrule load] after).

storage core claimrule load

Loading newly added Claim Rules into VMKernel.

storage core claimrule run

Running newly added Claim Rules.

storage core claimrule remove –r Rule_ID

Remove the Claim Rule with certain Rule_ID (requires [storage core claimrule load] after).

storage core claiming reclaim –d Device_ID

Rescanning ESXi’s adapters connected to device Device_ID for newly applied Claim Rules.

storage core claiming unclaim –t Type….

Unclaiming for resetting and reclaiming according to newly added Claim Rule.

storage core path list –d Device_ID

Listing all paths available to the device Device_ID.

storage filesystem unmount –l Datastore_Name

Unmounting the datastore Datastore_Name from the host, even if some powered-off VMs reside on it.

storage nmp satp rule add –t Type…. –o=Option

Adding new SATP Rule with Type, additional parameters and certain option (like enable_ssd).

storage nmp satp rule list

Listing all installed and loaded SATP rules on ESXi Host.

storage nmp satp rule remove –t Type

Removing an added SATP rule with Type and additional parameters.

storage nmp satp set –s SATP_Rule –P PSP_Rule

Setting certain default PSP_Rule for certain SATP_Rule.

system module get –m Module_Name

Getting all information about Module_Name module (driver) loaded.

system module list

Listing all modules (drivers) loaded within ESXi.

system settings advanced list

Listing all advanced settings descriptions and values on this host.

system settings advanced list –o Advanced_Setting_Path(/Branch/Adv_Option_Name)

Listing certain advanced setting’s description and value on this host.

system snmp –e ‘0/1 or true/false or yes/no’ -c SNMP_Target_Community –t Target_Name_or_IP@UDP_Port/Community

Enabling and setting SNMP v1 on a host towards target Target_Name_or_IP on port UDP_Port with community SNMP_Target_Community (the -c community is overridden by the /Community part of -t).

system snmp get

Listing all SNMP configured settings.

system snmp test

Sending a test SNMP trap.

system syslog config get

Getting all Network Syslog Collector settings configured.

system syslog config set --loghost=Collector_Name_or_IPv4 --logdir-unique=1

Configuring the remote syslog collector Collector_Name_or_IPv4, with each ESXi host logging to a unique directory named after its hostname.

system syslog reload

Reloading all new configured Network Syslog Collector settings.

system syslog mark –s=Message

Sending test message (Message) to the Network Syslog Collector to be logged as a test.
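Putting the three syslog entries above together, a typical remote-collector setup looks like this (the collector IP is an example value):

```shell
# Point the host at the remote collector; one sub-directory per hostname
esxcli system syslog config set --loghost=192.168.1.50 --logdir-unique=1

# Apply the new settings, then log a test message to verify end-to-end
esxcli system syslog reload
esxcli system syslog mark -s "syslog test from host"
```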


esxtop/resxtop: Monitoring tool for performance metrics of ESXi hosts.

-b –a –d Duration_in_Secs –n Iterations_No.

Batch mode: recording all performance metrics of the ESXi host every Duration_in_Secs seconds for Iterations_No. iterations, i.e. every metric ends up with Iterations_No. records.


vmkfstools: Managing VM disks and datastore file systems.

-c Size ‘/vmfs/volumes/Datastore_Hashed_Name/VM_Directory/Disk_Name.vmdk’ -a ‘Adapter_Type’ -d ‘Disk_Type’

Creating a Disk with certain size, name, adapter type and disk type.

-U ‘/vmfs/volumes/Datastore_Hashed_Name/VM_Directory/Disk_Name.vmdk’

Deleting certain disk.


vscsiStats: Performance monitoring tool for all VMs' disks.


-l

Listing all available VMs (Worlds) and disks (Handles).


-s

Starting monitoring on all VMs (Worlds) and disks (Handles).

-s -w World_ID -i Handle_ID

Starting monitoring on certain VM (World) and disk (Handle).

-c > ‘Path/Name.csv’

Creating Name.csv file in Path of all the outputs.

-p ‘Histo_Type

Drawing the selected histogram type (Histo_Type) for the statistics collected by the –s command.

Histo Types are: (all, ioLength, seekDistance, outstandingIOs, latency, interarrival)


-x

Stopping monitoring on all VMs (Worlds) and disks (Handles).

-x –w World_ID –i Handle_ID

Stopping monitoring on certain VM (World) and disk (Handle).
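A typical vscsiStats session chaining the options above (the World and Handle IDs below are placeholders; take the real ones from the -l output):

```shell
vscsiStats -l                    # find the World ID (VM) and Handle ID (virtual disk)
vscsiStats -s -w 12345 -i 8192   # start collecting for that VM/disk
# ... let the workload run for a while ...
vscsiStats -p latency            # print the latency histogram
vscsiStats -x                    # stop all collection when finished
```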


Share the knowledge ...

Previous: vSphere 5.x Notes & Tips - Part XXIII:

Next: Appendix:

Hi All ...
In the twenty-third part of our series, we'll go through vSphere Management Assistant (vMA): a virtual appliance released by VMware that helps VI administrators connect to hosts and perform advanced configuration through either SSH or vSphere CLI. For more information, visit the following VMware portal:

Now, Let's Start...

1. Different Authentication Policies Actions on vMA:

The AD-Auth policy is used to authenticate to vMA targets through Active Directory. By defining AD users as administrators in the vSphere environment, vMA can be used to manage all hosts and the vCenter Server: add all of them as targets to vMA, then connect vMA to the vCenter Server to manage all hosts. This can't be done as easily with the FP-Auth policy, as you'd have to create the same admin user on vCenter and on every host in the environment, then add them all to vMA.

Adding vCenter as a target is useful for managing all hosts from the same command prompt by just changing the (--server) parameter.



2. Configuring and using vMA with AD Authentication Policy:

1-) After adding the required targets to vMA and setting their authentication policy to AD-Auth, log out of the (vi-admin) user and log into vMA with an AD user that has administrator permissions.

2-) Connect to the desired target using vifptarget –s Server_Name.

   3-) Use vMA to perform the required tasks.
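For example, once logged into vMA as an AD user with admin rights (the host name below is a placeholder):

```shell
# Set the fastpass target; subsequent vicfg-* commands then run against
# it without prompting for credentials again
vifptarget -s esxi01.example.com

# e.g. list the target host's physical NICs
vicfg-nics -l
```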

Share the knowledge ...

Previous: vSphere 5.x Notes & Tips - Part XXII:

Next: vSphere 5.x Notes & Tips - Part XXIV: