MJMSRI's Posts

I am having the same issue. The HPE host was on ESXi 5.5, then in-place upgraded to 6.5 with the HPE ESXi ISO. I am now trying to upgrade to ESXi 6.7 via the CLI, as the host has no internet access and there is no vCenter.

Running this command:

esxcli software profile update -p "(Updated) HPE-ESXi-6.5.0-Update3-iso-Gen9plus-650.U3.10.5.5.16" -d /vmfs/volumes/5b196996-e129f2d8-30e7-ecb1d7b298a0/ISOs/esxi.zip

returns this error:

[NoMatchError]
No image profile found with name '\(Updated\) HPE-ESXi-6.5.0-Update3-iso-Gen9plus-650.U3.10.5.5.16'
id = \(Updated\) HPE-ESXi-6.5.0-Update3-iso-Gen9plus-650.U3.10.5.5.16
Please refer to the log file for more details.

I have tried the above with " and with ' and with no spaces around the "(Updated)" part of the name, however none of them work. Running this command confirms the profile name:

esxcli software profile get

(Updated) HPE-ESXi-6.5.0-Update3-iso-Gen9plus-650.U3.10.5.5.16
   Name: (Updated) HPE-ESXi-6.5.0-Update3-iso-Gen9plus-650.U3.10.5.5.16
   Vendor: Hewlett Packard Enterprise
   Creation Time: 2020-07-21T15:45:54
   Modification Time: 2020-07-21T15:45:54
   Stateless Ready: False
   Description:
      ----------
      2020-07-21T12:23:21.047288+00:00: The following VIBs are installed:
        nvme          1.2.2.28-2vmw.650.3.129.16389870
        vsan          6.5.0-3.129.16389871
        vsanhealth    6.5.0-3.129.16389873
        esx-tboot     6.5.0-3.129.16389870
        esx-base      6.5.0-3.129.16389870
      ----------
      2020-07-21T12:21:51.106638+00:00: The following VIBs are installed:
        scsi-hpsa     6.0.0.84-1vmw.650.0.0.4564106
        elxnet        11.1.91.0-1vmw.650.0.0.4564106
        vsanhealth    6.5.0-3.126.15965596
        scsi-mpt2sas  19.00.00.00-1vmw.650.0.0.4564106
        vsan          6.5.0-3.126.15965595
        lpfc          11.4.33.25-14vmw.650.3.96.13932383
        qlnativefc    2.1.73.0-5vmw.650.3.96.13932383
        lsi-mr3       7.708.07.00-3vmw.650.3.96.13932383
        brcmfcoe      11.4.1078.25-14vmw.650.3.96.13932383
        esx-tboot     6.5.0-3.126.16207673
        esx-base      6.5.0-3.126.16207673
      ----------
      2020-07-21T11:44:32.510232+00:00: Host is upgraded with following VIBs from original image profile (Updated) ESXi-5.5U2-2069112-RollupISO-standard:
        net-tg3               3.136h.v55.1-1OEM.550.0.0.1331820
        scsi-celerity16fc     1.06-1OEM.550.0.0.1331820

Any ideas how to upgrade via the CLI from 6.5 to 6.7?
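In case it helps anyone hitting the same NoMatchError: the -p argument must name a profile that exists inside the depot zip, not the profile currently installed on the host. A minimal sketch below, assuming the depot is a valid offline bundle; the depot path and the 6.7 profile name are placeholders, not real values:

```shell
# Example path to the offline bundle on a datastore -- substitute your own.
DEPOT="/vmfs/volumes/datastore1/ISOs/esxi.zip"

# List the image profiles the depot actually contains. The name reported by
# 'esxcli software profile get' is the profile installed on the host, which
# will usually not be present inside a 6.7 offline bundle.
esxcli software sources profile list -d "$DEPOT"

# Then update using a name copied verbatim from that list
# (hypothetical profile name shown here):
esxcli software profile update -d "$DEPOT" -p "HPE-ESXi-6.7.0-Update3-Gen9plus"
```

The quoting was never the problem; the installed profile name simply is not in the depot, which is why every quoting variant returned NoMatchError.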
Hi All,

Looking to see if the approach below is the best plan for a migration. Currently an 8-node vSAN 6.5 stretched cluster is in place across 2 sites, with Windows vCenter 6.5.

The plan is to purchase new hardware (8 new hosts), fit them into each site, install ESXi 7.0 on them, then deploy a new VCSA 7.0, enable vSAN, set up fault domains, etc. At that point the old and new vSAN clusters will sit side by side, and the VMs will then need to come over from the old vSAN to the new one.

Am I correct that we just need to ensure the VMware management and vMotion networks can communicate from the old cluster to the new cluster, so we can then perform a combined vMotion and Storage vMotion of the machines from the old to the new cluster? So we won't need both vCenters to be in the same SSO domain, or any communication from the old vSAN Layer 2 network to the new vSAN cluster network?

Thanks
Hi,

The article below details some good examples of upgrades that should help you plan this: VMware Knowledge Base

It sounds like you are using Windows vCenter, so I would use this project to migrate and upgrade at the same time to VCSA 6.7: Migrating vCenter Server for Windows to vCenter Server Appliance

There is no upgrade path from your SRM version to the latest SRM release, and that will also be on a Windows VM, so I would look to export the config, deploy a new SRM appliance and import the config. For compatibility with any vCenter 6.7 release you would need SRM version 8.1 as a minimum.

Also don't forget to check your Dell arrays' compatibility, as both sites should be on the same version of vSphere. Looking at this, it appears all are supported up to vSphere 7.0, so 6.7 should be fine: VMware Compatibility Guide - Storage/SAN Search
With that plan I would configure a new cluster in vCenter for the Gen10 hosts. When you create the new cluster, add all the Gen10 hosts, then, before any VMs are on the cluster, enable EVC and set it to the highest level available for Intel. Also set up HA, DRS, etc. to best practice. Then, when you have time, you can vMotion VMs from the Gen9 cluster to the Gen10 cluster. Once complete, the hosts in the Gen9 cluster can be decommissioned and that cluster removed.
Hi,

For your infrastructure it's best practice to run the same version of vSphere at both sites: if you fail over from 6.7 to 6.5 the VMware Tools versions will be different, and when failing back the VMs will go from 6.5 to 6.7, so it's best to have 6.7 at both sites. I see the storage compatibility is why one site is on 6.5, so ideally replace that storage so you're on the same version at both sites. If that's not an option, then at least test a VM failover from 6.7 to 6.5 and a failback, to make sure there are no issues.

For replication, it looks like you are aiming for traffic isolation. If so:

Create a new vSS/vDS and port group for replication, then create a VMkernel port with the replication service enabled on each host.

On each replication appliance, add an additional interface, so each appliance has two interfaces: one for management and one for replication. Set a static IP on the replication interface via the appliance VAMI (:5480 login).

Once that's set, go back to the configuration page and set the same IP address in "IP Address for Incoming Storage Traffic". That will complete the segregation.
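The host-side part of the traffic isolation above can be sketched from the ESXi shell. This is a rough sketch, assuming a standard vSphere Replication setup; the vmk number, port group name and addressing are placeholders:

```shell
# Create a new VMkernel port on the dedicated replication port group
# (example names and IPs throughout).
esxcli network ip interface add -i vmk2 -p "Replication-PG"
esxcli network ip interface ipv4 set -i vmk2 -t static \
    -I 172.16.50.11 -N 255.255.255.0

# Tag the interface for outgoing replication traffic; the NFC tag is
# typically applied on hosts at the site receiving replica data.
esxcli network ip interface tag add -i vmk2 -t vSphereReplication
esxcli network ip interface tag add -i vmk2 -t vSphereReplicationNFC
```

The same tagging is available in the vSphere Client by ticking the replication services on the VMkernel adapter, so use whichever method fits your change process.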
Before adding the Gen10 hosts to the cluster, select Enable EVC on the cluster and then select the desired Intel EVC level. This pre-checks against the hosts and advises which are and aren't compatible, so you can see which level can be set; if they are all Gen9 hosts, they will be using just the features available from the host CPU.

If you added the Gen10 hosts into the Gen9 cluster and then wanted to enable EVC, this is more complex, as some machines could already be using the newer Gen10 CPU features. In that case powering the VMs off would be safest, as the EVC mode would need to be aligned to the lowest CPU feature set, which would be the Gen9s. So all of the newer features in the Gen10 CPUs would be masked.
Thanks depping for the reply. So with this approach I would set up the new 6 hosts across both sites, plus a new vCenter. Would I then add the new vCenter to the same SSO domain (vsphere.local), so that Enhanced Linked Mode is in place to do the Storage vMotion?
Hi All,

I have a project to upgrade and migrate vSAN to new hosts. The current infrastructure is:

3 sites, vSAN stretched cluster
Site 1 has an ESXi 6.0 host and the virtual vSAN witness appliance
Site 2 has 3 x ESXi vSAN 6.0 hosts
Site 3 has 3 x ESXi vSAN 6.0 hosts
1 x Windows-based vCenter Server within the vSAN cluster on Site 2

The project is to upgrade to vSphere 6.7 (so ESXi 6.7, vSAN 6.7 and VCSA 6.7), as well as replace the 6 x vSAN hosts due to age and migrate from Windows vCenter to the VCSA. My plan is as below; do you believe this is the best approach?

Perform a vCenter migration with the Migration Utility on the VCSA ISO, from Windows to a new VCSA within the existing cluster. This keeps all the existing config, such as the vSAN configuration, vDS networking, etc.

Install ESXi 6.7 on the 6 new hosts, aligned to the VCG for vSAN, firmware, etc.

Add the new hosts to the existing vSAN cluster and contribute their storage to the cluster.

Enter old vSAN host 1 from site 1 into maintenance mode and select "Full data migration"; once complete, remove the host from the cluster.

Once the above is complete, enter old vSAN host 1 from site 2 into maintenance mode and select "Full data migration"; once complete, remove the host from the cluster.

Repeat the above for old vSAN hosts 2 and 3 from each site, removing each host from the cluster once complete, which will result in just the 6 new hosts within the cluster.

Thanks
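For reference, the "Full data migration" step in the plan above can also be driven from the ESXi shell on each old host; a hedged sketch, assuming vSAN 6.x esxcli syntax:

```shell
# Enter maintenance mode and evacuate all vSAN components from this host
# (the CLI equivalent of selecting "Full data migration" in the UI).
# Other --vsanmode values are ensureObjectAccessibility and noAction.
esxcli system maintenanceMode set --enable true --vsanmode evacuateAllData

# Confirm the host is in maintenance mode before removing it from the cluster.
esxcli system maintenanceMode get
```

Evacuation can take a long time with evacuateAllData, as every component on the host is resynced elsewhere first, so allow for that in the change window.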
One option could be to set up VMware Fault Tolerance and enable it on that virtual machine, so it is protected by a copy on another ESXi host; that gives you protection while you restart the source host for the upgrade. You would just need to follow the Fault Tolerance setup, and you could get a trial licence if your ESXi licence doesn't cover this feature.
Hi All,

I have usually deployed replication with SRM, and some standalone vSphere Replication setups, but all have had a vCenter at both sites. However, I now have a scenario where there is only to be one vCenter, in Production, and no vCenter in DR. Both sites will have ESXi hosts, with different shared storage at each site.

My query is about the disadvantages of having just one vCenter: what are the main drawbacks? I also can't find any best-practice guidance stating that one VC at each site is recommended, or the reasons why. My understanding is that having only one VC will slow down the failover process in a scenario where the VC itself has failed, as you would first need to restore the VC and then fail over the machines.

With this topology, as I understand it, there will be:

1 x vCenter in Production
1 x vSphere Replication appliance (management server) in Production
1 x vSphere Replication server in Production
1 x vSphere Replication server in DR

So a total of 4 VMware appliances.
I have created a new test cluster in vCenter and initially enabled just DRS. Once created, I then enabled HA, then Proactive HA, and under the Providers tab the HPE provider was listed and I was able to select it. So on a new cluster with no hosts, the provider appears, which indicates a host issue/incompatibility. As vSAN is enabled, it's not an easy task to simply move the hosts from the current cluster to the new cluster, and even if I did, the feature might then disable itself once the hosts are added, bringing the incompatibility over with them if it exists.
Hi,

I am looking to enable Proactive HA. As we have HPE hardware (new DL380 Gen10), I have downloaded the free HPE OneView for vCenter 9.3 and deployed it into the cluster. Further information below:

The entire infrastructure is new: DL380 Gen10, vCenter 6.7 U1, ESXi 6.7 U1 and vSAN enabled on 6.7 U1. The hosts are all identical in firmware and drivers.

All hosts were installed with the HPE customized ESXi 6.7 U1 image, and I have checked and verified that all hosts have the latest HPE offline bundle installed.

I deployed HPE OV4VC 9.3 into the cluster, registered vCenter and connected successfully, then verified in the H5 and Flash clients that the HPE OV plugin is installed and in good health.

I set up the iLO and ESXi credentials in OneView for vCenter so it has access to these. The hosts have iLO Advanced applied.

After all of this there are new tabs in vCenter under Monitor and Configure for HPE server hardware, however no data is being received on any of the tabs. I have restarted the hosts, vCenter and the OV4VC appliance, as well as selected "Refresh" on vCenter in the OV4VC console. OV4VC, the hosts and vCenter are all on the same subnet and VLAN, so there are no issues with ports, routing, etc.

I have right-clicked the cluster to enable Proactive HA and can enable it, however no providers appear under the Providers section. HPE should appear there. Any ideas?
Not a very good test? But then you advised using the same test, simply changing the command slightly from 9000 to 8972 and removing -c, which is the count option that makes the command run for a set number of pings.
Hi,

All hosts will need a VMkernel port configured with the vSAN service enabled; it sounds like you have done this. The next best steps are:

Set a static IP address and subnet on every host's vSAN VMkernel port, all on the same Layer 2 network.

Check the VLAN ID on the virtual switch and ensure the VLAN is set the same on each host.

Check the physical switches to ensure the ports these connect into are all enabled (not shut down) and have the same access VLAN ID set.

If the above is all in place, open PuTTY to a host and vmkping one of the other hosts' vSAN VMkernel ports to test connectivity:

vmkping -I vmk1 -s 9000 -c 100 10.10.10.10

(Change vmk1 to the VMkernel port that vSAN is assigned to; vmk0 will be management. Specify the destination host's static vSAN IP.)
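If jumbo frames are configured on the vSAN network, it's also worth validating the MTU end to end; a sketch below, with the interface and destination IP as examples. Since ping adds a 20-byte IP header and an 8-byte ICMP header, the largest unfragmented payload for a 9000-byte MTU is 8972:

```shell
# Basic reachability test from vmk1 to another host's vSAN VMkernel IP.
vmkping -I vmk1 10.10.10.10

# Jumbo-frame test: -d disallows fragmentation, and 8972 = 9000 (MTU)
# minus 20 (IP header) minus 8 (ICMP header).
vmkping -I vmk1 -d -s 8972 10.10.10.10
```

If the plain ping works but the 8972-byte one fails, the MTU is mismatched somewhere along the path (vmk port, virtual switch or physical switch).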
Hi Bob,

Thanks for the reply. Unfortunately Quickstart doesn't feature a "re-run" option where, if you want to make a change, you can do it via Quickstart; the only way to change things is to do it manually, as was always the case before Quickstart, but then that triggers the alerts. You could silence them, however I want this to be all green and valid, so the only way around it was to evacuate the cluster, build a new cluster and start again. A lot of work, but I now know that this needs to be set up correctly on day 1 and no changes can be made after that. I have a case open with GSS and have talked at length with them about this; they were very keen for me to provide an email so they can review this feature and improve it.

Another issue is that I already had VMkernel ports configured on each host prior to running Quickstart, however when you select Finish, Quickstart deletes all VMkernel ports on the hosts apart from vmk0 (management) and then assigns vSAN to vmk1. There are no warnings about this, and no variables in the script to assign other VMkernel numbers; it seems the script is hardcoded to use vmk1 for vSAN and delete whatever else is there. So beware of this if you have any other ports set up before running it.
Hi,

It seems I have the same issue. I had set up with Quickstart and all was fine. I then had an issue with the dedicated vSAN vDS, so I created a new vDS and migrated the vSAN VMkernels over, and now Quickstart is showing the below for the "Host Compliance check for Hyper Converged Cluster":

common.compliant

All 5 hosts have been migrated to the new vDS along with all VMkernels, etc. So this Quickstart doesn't seem able to adapt to any changes? Is there a config file I can edit so this common.compliant message doesn't appear?
Don't delete the WitnessPG; simply untick the vSAN service on it and enable it on the Management PG.
Looks like it's on the same vSwitch as the Management Network and its VMkernel; it's best to segregate vSAN off to its own switch with different vmnics. A Distributed Switch licence is included with vSAN, so you could create a new vDS for vSAN, bind the adapters to that, and then configure the vSAN VMkernel ports on their own unique IP subnet and VLAN.
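Once the new switch and uplinks are in place, binding a dedicated VMkernel port to the vSAN network can be sketched as follows from the ESXi shell; the port group name, vmk number and addressing below are examples, not your real values:

```shell
# New VMkernel port on the dedicated vSAN port group, on its own subnet.
esxcli network ip interface add -i vmk2 -p "vSAN-PG"
esxcli network ip interface ipv4 set -i vmk2 -t static \
    -I 192.168.50.11 -N 255.255.255.0

# Tag the interface for vSAN traffic.
esxcli vsan network ip add -i vmk2

# Verify which interfaces now carry vSAN traffic.
esxcli vsan network list
```

The same result can be achieved in the vSphere Client by ticking the vSAN service on the new VMkernel adapter.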
Hi All,

I have looked through StorageHub and the 2-node/stretched cluster documents but still cannot get a firm answer on the networking for a new 2-node cluster. The scenario is as below:

Everything will be hosted in one location on one site, so it is all Layer 2, connected across the same Cisco switches. There is an existing vSphere 5.5 cluster, and there will be a new vSAN 6.7 2-node direct connect cluster. They will all use the same VLAN 100 for VMware management, so all hosts, vCenters, etc. will have IPs in the same 10.10.0.0/16 subnet.

The existing vSphere 5.5 cluster is where the new vSAN witness appliance will be hosted as a VM.

The new cluster will be vSAN 6.7 2-node direct connect over 2 x 10GbE interfaces, carrying the vSAN VMkernel and vMotion VMkernel in an active/standby configuration (vSAN = vmnic4 active, vmnic5 standby; vMotion = vmnic5 active, vmnic4 standby).

As this is a switchless solution for the vSAN and vMotion networks, I'm not sure if I need to specify VLANs on the vSAN and vMotion networking, or whether having them on 2 different VMkernel adapters will suffice. I have made up two VLANs for these networks, which don't exist anywhere else yet, such as on the Cisco switches:

vSAN = VLAN 500, 192.168.10.0 / 255.255.255.192
vMotion = VLAN 600, 172.16.10.0 / 255.255.255.192

The onboard 4 x 1GbE interfaces in the new vSAN hosts will be used as 2 for VMware management and 2 for virtual machine networks. As this will be vSAN 6.7 and all Layer 2 in the same site, I don't believe we will need any static routes for the traffic between the 2-node cluster and the witness on the other cluster.

The part I want to clarify is the networking from the vSAN cluster to the vSAN witness. I see the witness is deployed with 2 vNICs, one for management and one for the WitnessPG; however, it is supported to enable vSAN networking on the management interface, so I think I will do that, in which case the witness traffic will communicate over the management network. If that's the case, then the VLAN 500 vSAN network I have made up will carry traffic only over the direct connections and does not need to be routed anywhere else, such as to the witness appliance? Or is there more to this?
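If witness traffic should flow over the management network as described, vSAN 6.5 and later support tagging a separate witness traffic type on the data nodes of a 2-node cluster. A hedged sketch below, assuming vmk0 is the management interface on each data node:

```shell
# On each of the two data nodes: carry witness traffic over the
# management VMkernel port instead of the direct-connect vSAN network.
esxcli vsan network ip add -i vmk0 -T witness

# The direct-connect interfaces stay tagged for regular vSAN data traffic;
# verify the result:
esxcli vsan network list
```

With that in place, the made-up VLAN 500 only exists on the direct-connect links and never needs to reach the witness; the witness is reached over the routable management network instead.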