mbartle's Posts

I posted this in another thread, but for extra visibility I'll add it here as well: ESXi 7.0.2c and 7.0.2d did not fix the SD card bug for us. We run Dell FC640 blades; the dual SD card firmware is at 1.15 (all other firmware is up to date). We had been running 7.0.1d without a single issue. Within 24 hours of upgrading to 7.0.2c, one node had an SD card die. A day later, a second node had a card die. I applied 7.0.2d and it did nothing, so we've rolled back once again to 7.0.1 and have ordered BOSS cards and M.2 drives. I know many folks have had success with this, but I wanted to let people know that something is still causing cards to die. If anyone from VMware is reading this, I would be happy to provide logs to help diagnose it.

TL;DR: stay away from 7.0.2 if you value your free time and enjoy stable servers.
I have been using SD cards with ESXi for close to a decade now. We are finally replacing these cards in our Dell blades with BOSS controllers and 480 GB M.2 SSDs. Is it still best practice to move the scratch logs to a SAN volume, or should I just let the installer decide, now that we're off SD cards and have presented a massive boot device? I'm thinking of the excessive wear on the SSD from having logs written to it, so my reasoning is to move the logs to SAN. Is this correct?
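If we do go the SAN route, this is roughly how I'd script the change with pyVmomi rather than clicking through each host. A minimal sketch only: the vCenter address, host name, credentials, and datastore path are all placeholders, and the setting only takes effect after a reboot.

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholders throughout; lab-only certificate handling.
ctx = ssl._create_unverified_context()
si = SmartConnect(host='vcenter.example.com', user='administrator@vsphere.local',
                  pwd='********', sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    host = next(h for h in view.view if h.name == 'esx01.example.com')
    # Point scratch at a directory on a SAN-backed VMFS datastore.
    host.configManager.advancedOption.UpdateOptions(changedValue=[
        vim.option.OptionValue(key='ScratchConfig.ConfiguredScratchLocation',
                               value='/vmfs/volumes/san_datastore01/.locker-esx01')])
finally:
    Disconnect(si)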
The only advice I can offer is this: DO NOT upgrade to 7.0.2 (any flavor) if you use SD cards. I've had clusters running 7.0.1 on 16 GB cards, with scratch and coredump moved to disk and VMware Tools offloaded to RAM, and not one single issue. I patched to 7.0.2c, and 24 hours later an SD card died on one blade; 48 hours later, a second one. So I am rebuilding back to 7.0.1. Unless you replace the SD cards with M.2 or SSD, you are asking for trouble. Trust me.
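For anyone wondering what "coredump to disk" means concretely, this is roughly what I run from the ESXi shell (stock ESXi ships a Python interpreter and has esxcli on the PATH). A sketch only; the datastore and file names are placeholders.

import subprocess

def esxcli(*args):
    # Thin wrapper so each command's output is visible.
    result = subprocess.run(['esxcli', *args], capture_output=True, text=True)
    print(result.stdout or result.stderr)

# Create a coredump file on a SAN-backed VMFS datastore and activate it,
# so crash dumps stop landing on the SD cards.
esxcli('system', 'coredump', 'file', 'add',
       '--datastore', 'san_datastore01', '--file', 'esx01-coredump')
esxcli('system', 'coredump', 'file', 'set', '--smart', '--enable', 'true')
esxcli('system', 'coredump', 'file', 'list')  # verify the new target is active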
Well, 7.0.2d did nothing to help. Now I get to spend my Friday and Saturday rebuilding these hosts back to 7.0.1d until we can get BOSS cards. Not even going to bother with my other cluster for now. What a disaster. I've been using VMware products since the 2.x days. Really sad to see them fall this hard.
A second host lost an SD card 48 hours later. This patch didn't fix the problem.
Check under the 7.0.0 heading. For some reason patch 1c was listed there as well, rather than under its own category. I just built an image using Lifecycle Manager and exported it, then used it as a baseline to remediate my other cluster.
https://docs.vmware.com/en/VMware-vSphere/7.0/rn/vsphere-esxi-70u2d-release-notes.html
Hi Patrick. The dual SD card firmware was at 1.13, which I patched when I did the 6.7 to 7.0.1 upgrade. Dell just released 1.15, which I upgraded to before uplifting to 7.0.2c. We also patched the iDRAC to 5.0 a few days ago, so every firmware component on these blades is up to date. We are looking to replace the SD cards with SSDs, as the BOSS cards are very hard to get hold of due to the chip shortages. I also see VMware just released the 7.0.2d update. I'm not holding my breath.
I just patched my Dell FC640 blades from 7.0.1 U1d to 7.0.2c. This morning I was looking at the events in vCenter and saw this:

09/15/2021, 11:09:18 PM  Device mpx.vmhba32:C0:T0:L0 has been removed or is permanently inaccessible. Affected datastores (if any): Unknown. Related events: There are no related events.

09/15/2021, 11:09:18 PM  Permanently inaccessible device mpx.vmhba32:C0:T0:L0 has no more opens. It is now safe to unmount datastores (if any) Unknown and delete the device. Related events: There are no related events.

When I check the storage devices, vmhba32 is up and running, and there are no issues with the VMs. We boot off dual SD cards (we're looking into replacing these). My much older cluster has been running on 7.0.1 without a single issue, and this cluster didn't have a problem on 7.0.1 either; I only noticed this two days after we upgraded. We have done all the mitigations: Tools to ramdisk, coredump to disk, scratch to persistent disk. Which VMware log can I search to see if there are more of these events? The vCenter events view only shows the last 100.

EDIT: It looks like the iDRAC is showing only one SD card on this node now.
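To partially answer my own question: the same device-loss messages should also show up in /var/log/vmkernel.log and /var/log/vobd.log on the host, which go back further than the vCenter events view. Here's a quick scan I put together for the ESXi shell; the match strings are just lifted from the event text above, so treat them as assumptions and adjust to whatever your logs actually contain.

# Sketch: scan host logs for device-loss messages like the events above.
# Log paths exist on stock ESXi; the needles are assumptions taken from
# the vCenter event text.
needles = ('permanently inaccessible', 'has been removed', 'mpx.vmhba32')
for path in ('/var/log/vmkernel.log', '/var/log/vobd.log'):
    try:
        with open(path, errors='replace') as log:
            for line in log:
                if any(n in line for n in needles):
                    print('%s: %s' % (path, line.rstrip()))
    except FileNotFoundError:
        print('missing:', path)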
I have an issue, and I'm not sure whether this is normal. We are running ESXi 7.0.1 U1d, which we upgraded from 6.7 using the traditional ISO/baseline method. I want to bring these hosts up to 7.0.2 U2c using the new image-based method. When I start the process and select the image profile plus the Dell EMC add-ons, I get this message at the top: "Identified standalone vib(s) vmware-fdm 7.0.2-18356314 belonging to vSphere FDM 7.0.2-18355786 solution component." When I save and have it validate the image against the HCL, it comes back with: "The following VIBs on the host are missing from the image and will be removed from the host during remediation: vmware-fdm(7.0.2-18356314)." This VIB seems to be the HA agent. If it is removed during remediation, will it be re-installed when the host exits maintenance mode? How else would the host be able to remain a cluster member?
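For what it's worth, this is the quick before-and-after check I plan to run from the ESXi shell (where esxcli is on the PATH) to see whether the FDM agent comes back once the host rejoins the cluster. A sketch, not gospel.

import subprocess

# List installed VIBs and filter for the HA (FDM) agent.
out = subprocess.run(['esxcli', 'software', 'vib', 'list'],
                     capture_output=True, text=True).stdout
fdm = [line for line in out.splitlines() if 'vmware-fdm' in line]
print('\n'.join(fdm) if fdm else 'vmware-fdm not installed (yet)')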
Luciano, something does not make sense here. I ran these in my test environment. vCenter was 7.0.2 build 17958471; I ran the upgrade to .400 from the appliance update section. It patched successfully and brought vCenter to build 18356314. The build number for ESXi 7.0.2 U2c is 18426014, which is higher than vCenter's. In all my years of working with these products, I can't ever remember ESXi having a higher build number than vCenter. My understanding has always been that vCenter must be equal to or higher in build number than ESXi, so I am not sure I want to patch production without some clarification here. I did the patch in test using the Host Security Patches and Critical Host Patches baselines; however, the build would be the same if I created an image for my production cluster: vCenter would still be on a lower build than ESXi. Is this OK?
I am running the Dell custom ISO: DEL-ESXi-701_17551050-A05. My concern about using Lifecycle Manager is that the Dell add-ons won't be upgraded. What version were you on prior to updating via Lifecycle Manager? I am hoping Dell EMC releases a custom ISO for this, but VMware hasn't even released one yet. They released a 7 GB ISO for vCenter 7.0.2 U2c, but the ESXi ISO available for download is still 7.0.2 U2a.
Do we know if VMware and/or Dell will be making a custom ISO for this? When you go to the product download page, you can see they have a new ISO for vCenter 7.0.2 U2c; however, the ISO for ESXi is still 7.0.2 U2a. Ideally I'd like the Dell EMC customized ISO for U2c. I know the patches are available in Lifecycle Manager, but my concern is potentially having out-of-date Dell customized add-ons in the image. I keep checking every day and am surprised they have not released an ESXi ISO for U2c yet.
I had to rebuild one host from scratch to 7.0 U1 Dell EMC, and it has a valid /altbootbank with files in it. The other three hosts were upgraded from 6.7 U3 to 7.0 U1 using Lifecycle Manager (with the same image I used on the rebuild). These three hosts have a valid altbootbank, but the folder is empty on all three. The servers are operating fine; I can reboot them, perform vMotion, etc. I suspect this isn't normal. Any idea how to go about fixing it? I opened an SR, but support is so slow these days that I figured I'd ask here as well. My other cluster, still on 6.7, has files in altbootbank on every host. I suspect the in-place upgrade caused this, so I'm a bit hesitant to upgrade the others until I sort it out. It's not that the altbootbank is corrupted; it's just empty. I can create and delete files manually in that folder.
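For anyone who wants to compare their own hosts, this is all I'm doing to check; nothing fancier than listing both boot banks from the ESXi shell.

import os

# Count and list the contents of both boot banks.
for bank in ('/bootbank', '/altbootbank'):
    entries = sorted(os.listdir(bank))
    print('%s: %d entries' % (bank, len(entries)))
    for name in entries:
        print('   ' + name)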
Thank you. Since I only did one host, I may seriously consider rebuilding it back to 6.7 U3 and waiting for further fixes from VMware. I have an SR open for my long-boot issue. I'll see what they say, but after reading all your posts, I have some concerns about 7.0.2.
Hi Luciano. I did not know I would have these issues, or I would not have upgraded. One host was a test server, and it seems to have worked OK. I took my first prod server to 7.0.2, and it now takes 45 minutes to boot, stuck on "vmw_satp_alua loaded successfully". Then I happened to see the reports of the SD card problem. I did the HCL compatibility check and even a Skyline check, and not once did either flag potential issues with SD cards. I may just wait for 7.0.3. I don't want to do the other eight hosts, because I really don't want to have to build them from scratch and then face potential server loss due to corrupt SD cards. This version seems like one gigantic mess. So glad I did not apply the upgrade to the whole cluster and chose to start with just one host.
I just upgraded two hosts that run Dell dual SD cards to v7, and I noticed a few things:

1: As soon as I added VMFS storage to one box, it automatically moved /scratch (the .locker directory) to the HDD on its own.
2: My other cluster server already had scratch going to a SAN disk and retained those settings.

I also followed the KB to move the Tools image to RAM (sketch below). One host has been upgraded for a week without any issues at all; the other was done yesterday (but now takes 45 minutes to boot), so I've pulled it out of the cluster until VMware can figure out why it gets stuck for so long after loading the SATP_ALUA policy at boot. I am a bit concerned that hosts will stop working, so I've paused the upgrades on the rest of the servers. If one has performed the mitigations listed above, is there still a chance the SD cards will stop working?
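For reference, the Tools-to-RAM change I mean is the UserVars.ToolsRamdisk advanced setting from the VMware KB. Here's a hedged pyVmomi sketch for setting it across every host in the inventory; the vCenter address and credentials are placeholders, and each host still needs a reboot for the ramdisk to take over.

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholders throughout; lab-only certificate handling.
ctx = ssl._create_unverified_context()
si = SmartConnect(host='vcenter.example.com', user='administrator@vsphere.local',
                  pwd='********', sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        # 1 = serve the VMware Tools image from a ramdisk instead of flash.
        host.configManager.advancedOption.UpdateOptions(changedValue=[
            vim.option.OptionValue(key='UserVars.ToolsRamdisk', value=1)])
        print('UserVars.ToolsRamdisk=1 set on', host.name)
finally:
    Disconnect(si)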
Hi everyone. This just happened to me this morning. I used Lifecycle Manager to upgrade one host from 6.7 U3 to 7.0.2 Dell EMC, and it now takes 45 minutes to boot. It gets stuck at the exact same spot and has similar events in the log files. Was a fix ever determined for this? I would prefer not to have to rebuild each host from scratch. I've opened an SR and will post back with any updates.