iLikeMoney's Posts

I'm far from an expert in SQL clustering; frankly, I don't like the idea of clustering anything at the OS level when there's HA available at the host level, but I have had to set it up before. I never mess with RDMs due to the snapshot issue, just thick eager-zeroed VMDKs. The last time I did a Microsoft cluster with SQL I used three Server 2012 R2 VMs: two running SQL Server and a third acting strictly as an iSCSI target. The SQL database itself sat on that shared-storage iSCSI target server, and all of the VMs sat on RAID-10.
I take that back. For grins I just tried logging into ESXi 6.5 with the 6.0 thick client, which surprisingly worked (it does not work with vCenter, but this particular ESXi server is not managed by a vCenter), and there are all the resource pools. So they're still there; I created a new resource pool and dropped a VM into it with no issues. Next I closed that out and tried to access the ESXi host with the flash-based web client. It's once I'm logged in there that I lose visibility to the resource pools. Maybe there's a view in the web client that shows the resource pools that I just haven't clicked on yet.
Upgraded a stand-alone 6.0 ESXi host to 6.5.0b this morning. Put it into maintenance mode using the thick client for the last time... makes me sad. Anyway, the upgrade went smoothly except that I no longer see any of the 15-20 resource pools we had defined on it. Historically we use them not so much to throttle resources as to organize VMs by user: if we give someone a local account and define it on the resource pool, those are the only VMs they have visibility to when logging in. (On our vCenter we just use folders for that purpose.) So have resource pools been removed from standalone ESXi as of version 6.5?
I was just about to report back that OVF deployment via a direct connection to the host is in fact working now (post-patching), so your reply is very timely; thanks for helping.

In my case there were still a number of hoops to jump through to get it working. One was that I had to ditch IE for Chrome and then install a newer version of Flash on my machine (which, oddly enough, prompted me to reboot). After connecting directly to a host using Chrome I was finally able to deploy the OVF. IE actually complained that the OVF exceeded 4GB, so Chrome was the only path forward anyway.

Another thing to add: when I deploy the OVF via direct connection to the host, just as the deployment starts I notice the following in the task list:

Task: Reconfig VM
Initiator: VC Internal
Result: 'Failed - the operation is not allowed in the current state'

This error is immediately followed by other tasks: 'Upload disk' (the VMDK of the OVF) and 'Import vApp'. Finally, the upload of the VMDK took far longer than it typically does, which had me expecting it to fail right at the end, but it was OK and the VM booted fine. For an OVF of this size (13GB) I am typically able to deploy in 30 minutes or so, but this took nearly 5 hours. So there's apparently some I/O issue going on here, or some interference due to vCenter being in the mix, but I'm happy to at least report that it finally works. The time is problematic, but as a workaround I can deploy something before I leave work every night, convert it to a template once it's on there, and I should be OK.

Cheers.
Patched both ESXi servers in the cluster. I was on 6.5.0 build 4564106, and using Update Manager I now have them at 6.5.0 build 5310538.

Now when I attempt to deploy an OVF to see if the patching helped, I receive the following message: "This version of vCenter Server does not support Deploy OVF Template using this version of vSphere Web Client. To Deploy OVF Template, login with version 6.5.0.0 of vSphere Web Client."

So in the vSphere Web Client I go to Help > About, where it shows: vSphere Web Client Version 6.5.0 Build 5178943. Is there some other rev of the web client that matches 6.5.0.0? I must say it's all very interesting, and not in a good way.
After further inspection it seems all of the VMs that failed to migrate were on LUN01, and host B is having intermittent issues 'seeing' LUN01. In the past I have run something at the command line to get the signature squared away, but apparently that fix did not persist. The other 20 LUNs are fine; not sure why this one causes an issue. I bought a Compellent SAN from Dell, direct-connected to the hosts over SAS. Hopefully the Compellent doesn't end up being a mistake.
Thanks for the link. It's good info, but it seems to be a different issue than what I'm seeing. What I'm going to attempt next is to install the latest patches on my ESXi servers; I will report back if it helps. I find it extremely hard to believe that OVF deployment via the web client doesn't work at all on 6.5 vCenter. I don't know if this is something specific to my own setup or if everyone is seeing this behavior.
Simple setup: a two-host cluster on ESXi 6.5. I have 15-20 VMs on host A that I want to vMotion to host B in prep for ESXi patching.

I would say 10 of the VMs vMotion just fine. The other 4 or 5 appeared to vMotion (no errors were generated; each shows as 100% completed), but afterward they remained on host A. In the web client I right-clicked the header row and added the 'Host' column to the VM inventory, and sure enough a handful of VMs still remain on host A. I double-checked for some potential causes: there are no snapshots on these VMs and no ISO mappings.

I went back in and attempted to vMotion one of these stragglers a second time; now host B is not even showing up in the dialog as a migration target. It only shows host A, which is not helpful as that's the host the VM is already on. I vMotioned it to host A anyway just for grins, and it completed successfully in a millisecond. Stuck in a loop with this... any ideas?
Thanks for the KB. Granted, this is not the error I'm receiving, nor am I using the appliance, but I will investigate how it might apply to my Windows vCenter. Failing that, I would hope there are other workarounds besides command-line deployment of OVFs; that would have to be a last resort.
I have a Win7 box with VMware Workstation 12.5 installed where I build all of the OVFs required by my team. Recently I upgraded from vSphere 6 to 6.5, and that of course comes with the pain point made mandatory: the web client. It's not enough that everything I need to do is now scattered all over the place going from thick client to web client, like going from XP to Win10 without trying to miss a beat. Something major that I need to do, deploying OVFs, isn't even working anymore.

First things first: I'm in vCenter and deploy the OVF. After maybe 30 minutes, right at the end of the deployment, I received an error regarding a checksum on the manifest file. Well, this is, umm, your manifest file, VMware, created by the latest version of VMware Workstation. Hmm.

I've traversed a few roads with this OVF stuff over the years, so I checked into the virtual hardware. It's a new Server 2016 VM I'm trying to deploy, and I wanted to make sure I didn't have anything messed up with virtual hardware versions or virtual hardware types that have been problematic in prior versions. Nope, everything looks to be OK.

Second try at deploying: while browsing to the files I didn't select the manifest file, I left it out, and the import appears to be successful with no checksum errors. Then I attempt to boot up the VM and receive this message:

A disk read error occurred
Press control + alt + del to restart

Hmm, is this OVF corrupt? So I re-export it from Workstation and redeploy it... same error. Do I have SAN issues? Redeployed to a different datastore on a different array... same error. Is there some incompatibility with the virtual SCSI controller on 6.5? I am running LSI Logic SAS, which should be fine, but I tried one or two others... same error. Upgraded the virtual hardware... same error. Upgraded the OVF Tool that VMware Workstation uses to the latest available version (4.2), re-exported the OVF, imported... same error.
Eventually I hit Google and saw a bunch of OVF-related items in the 6.5 release notes; nothing, however, seems to match up exactly with what I'm seeing here. I spoke with a couple of co-workers who have deployed the same OVFs that I built for them on standalone ESXi 6.5 (no vCenter) and claimed not to have experienced this issue!

Next I went to one of the standalone ESXi 6 servers that I have admin rights on in another dept. I deployed the same OVF; it deploys with no error and boots right up. So then I shut down the VM, exported it from the standalone ESXi server, and imported it into my 6.5 vSphere: same error, disk read error.

That rules out Workstation, but can anyone tell me what's going on here? I have wasted a good portion of several days on this issue and seem no closer to understanding it.
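Since leaving the .mf file out of the upload suppressed the checksum complaint, one way to narrow things down is to verify the manifest by hand before deploying: if the digests match the exported files, the OVF itself is intact and the error points at the client or tooling. A minimal sketch, assuming the usual "ALGO(filename)= hexdigest" manifest format (the file names below are made up for illustration):

```python
import hashlib
import re
from pathlib import Path

def verify_ovf_manifest(mf_path):
    """Check each 'ALGO(file)= hexdigest' line in an OVF .mf file
    against the actual files sitting next to it.
    Returns {filename: True/False}."""
    mf = Path(mf_path)
    results = {}
    for line in mf.read_text().splitlines():
        m = re.match(r'^(\w+)\(([^)]+)\)\s*=\s*([0-9a-fA-F]+)\s*$', line)
        if not m:
            continue  # skip blank or unrecognized lines
        algo, name, expected = m.group(1).lower(), m.group(2), m.group(3).lower()
        h = hashlib.new(algo)  # typically sha1 for Workstation-era exports
        h.update((mf.parent / name).read_bytes())
        results[name] = (h.hexdigest() == expected)
    return results
```

If every entry comes back True, the transfer corrupted nothing and the web client's checksum handling is the likelier culprit.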
I went to 6.5 recently and I'm finally forced to feel the pain of the web client. I avoided it for so many years, and now there's no way out. The best I could do to mitigate some of the headaches was to use my VMware Workstation installation to connect to my vCenter server, so at least my VMs show up in a thick interface again. It helps a bit, but this is really unfortunate. I hope VMware does the right thing and maintains several clients. Is it really that costly to maintain several client applications? And if you had to kill one of them off, I would think you'd choose the least popular one.
Thanks for the reply. No need to upload, as I have it working error-free now. I checked the .vmx on the Workstation side and sata0.present = "TRUE" was listed. I changed it to FALSE and exported to OVF again. This time no errors were generated upon deployment, and there's no mention of vmware.sata.ahci in the resulting .ovf file.
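For anyone hitting the same thing, the fix is a one-line edit in the .vmx before exporting. A small sketch that flips the flag programmatically; the sata0.present key comes from the post above, but the helper itself is just an illustration, not official tooling:

```python
from pathlib import Path

def disable_vmx_device(vmx_path, key="sata0.present"):
    """Rewrite a Workstation .vmx so the given device key is set to FALSE.
    Returns True if a line was changed."""
    path = Path(vmx_path)
    lines = path.read_text().splitlines()
    changed = False
    for i, line in enumerate(lines):
        # .vmx entries look like: sata0.present = "TRUE"
        if line.split("=")[0].strip().lower() == key.lower():
            lines[i] = f'{key} = "FALSE"'
            changed = True
    path.write_text("\n".join(lines) + "\n")
    return changed
```

Run it against a copy of the .vmx, then re-export the OVF from Workstation as usual.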
Hello to the community. I'm building some OVF templates this week in VMware Workstation 10, using virtual HW version 9 with ESX compatibility enabled, and exported them out to a share. The VM is Windows 8.1 with a SCSI HD. I attempted to deploy it on one of the ESXi 5.5 hosts and immediately received an error regarding an 'unsupported virtual hardware device vmware.sata.ahci'.

This one has me kind of stumped. I opened up the .ovf and see it has the typical SCSI controller with lsilogicsas. I know the optical drive I have set to IDE; having it on SATA was leading to another OVF deployment issue earlier this year, and that was my workaround for that one. I'm suspecting maybe this entry is just left over from that original drive. Any thoughts?
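The OVF descriptor is plain XML, so one way to confirm whether a leftover SATA controller is still declared is to list every hardware item's ResourceSubType and look for vmware.sata.ahci. A rough sketch that matches on local tag names to sidestep the OVF namespaces (illustrative only, not a validator):

```python
import xml.etree.ElementTree as ET

def list_resource_subtypes(ovf_path):
    """Return all rasd:ResourceSubType values declared in an OVF
    descriptor, e.g. 'lsilogicsas' or a leftover 'vmware.sata.ahci'."""
    tree = ET.parse(ovf_path)
    subtypes = []
    for elem in tree.iter():
        # ignore namespace prefixes; match on the local tag name only
        if elem.tag.split('}')[-1] == 'ResourceSubType' and elem.text:
            subtypes.append(elem.text.strip())
    return subtypes
```

If vmware.sata.ahci shows up with nothing attached to it, removing the orphaned device on the Workstation side and re-exporting should clear the deploy error.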
Ran into that problem using our corp proxy. Shutdowns are now fine, but I'm still seeing intermittent issues with Windows Update. Yesterday, for example, I had a Server 2012 guest where Windows Update kept failing on me. Nothing seemed to fix it until I went in and lowered Trusted Sites (where all of the Windows Update URLs are bucketed) from Medium to Medium-Low; then it found 15 or so updates right away. Just one of several nagging issues with these new OSes.
For compatibility it looks like the SAN had 4.1 support, but I don't see it listed for 5.1. It's more of a test/dev system anyway, so I'm not necessarily concerned with full support in this case; I mainly want to get it to 5.1 without losing the memory. I will shut down the VMs instead of pausing them. Thanks.
Can anyone provide me with a quick overview of the upgrade process and any gotchas I might run into when upgrading with this config?

The box is a Dell PowerEdge T710 with 148GB RAM and two physical CPUs. ESXi 4.1 is installed on a 2GB flash drive plugged into the motherboard. Other storage is as follows: six Cheetah SAS drives in an internal RAID-10, plugged into an Adaptec 5805 card, plus a Nexsan SATABoy SAN direct-connected to the server with fibre. This server started out with just the local storage and the SAN was added later, so there are VMs spread across both arrays. There are also 18 user-named resource pools that are being used for organizational purposes as opposed to anything 'resource' related. I'm thinking I'm going to lose the resource pools and will probably replace them with similarly named folders after the upgrade, so I took screenshots of the inventory in prep for this.

From a high level my plan is as follows:
1. Pause the running VMs.
2. Place the host into maintenance mode.
3. Shut down the host.
4. Pull the flash drive and make a copy of its current state in case things go wrong (I have an identical stick set aside for this purpose).
5. Reinstall the flash drive.
6. Boot to the ESXi 5.1 CD.

And here's where things start to get fuzzy for me. Upgrade the existing installation on the flash drive: can I do this with a host that has this much memory installed? Assuming I can, at that point I'm expecting to be on ESXi 5.1 with only 32GB of usable RAM. Can I then just install the vSphere 5 Enterprise license for 2 CPUs on this host to reclaim all 148GB of RAM? Any suggestions or clarifications are appreciated.
Discovered it's something to do with the proxy. Found this on Google:

Run command prompt (admin), then:
netsh winhttp import proxy source=ie

This command seemed to clear up both the Windows Update hang and the slow shutdown. No idea why this is necessary for Windows Update and not other services, especially when the machine has internet access and is able to auto-activate over the internet. Anyway, hope this helps someone.
Maybe two months ago I built some Windows 8 templates on Workstation 9 and set them up with ESX compatibility so they could be deployed on vSphere 5.1. Well, we just upgraded to 5.1 and I'm getting my first chance to deploy these OVFs on vSphere. I'm seeing a couple of performance issues with these guests. First, shutdowns take several minutes. Is anyone else seeing this? Is it a bug with Windows 8? Misery would love company here.

The other thing I'm seeing is when trying to get the latest Windows updates: just navigating to the Windows Update UI, where you can click the button to check for updates, there's a huge hang of sometimes several minutes. I have no clue what causes this either; I'm trying to figure out whether these are strange issues with my templates or whether others see this too. On Win8, try right-clicking My Computer > Properties > Control Panel Home > System and Security > Windows Update. At this point, for me, the hourglass spins and spins with no change for literally five minutes. Then finally the UI presents the button to check for updates, which you can click and wait yet more minutes, but that latter wait is at least expected behavior. We're also seeing the same problem when hosting on Workstation, so it's not a vSphere/ESXi thing.

Aside from those annoyances the performance of the Windows 8 OS seems to be fine; the VMs aren't slow with everything, just these few oddities. We never had these issues with Windows 7 VMs.
Thanks a ton for the reply; I'll be taking your advice seriously. We purchased Enterprise for 8 CPUs and vCenter Standard. I don't know about DRS yet; I'm trying to get further confirmation regarding exactly what we have. For the SAN we're running a SATABeast on something like two 16-drive RAID-10 arrays over iSCSI; each server points to one array, with four 2TB LUNs/stores per array. I'll report back on how things go or if we run up against any issues.
Sometime within the next 60 days or so we're planning to swap out two currently running ESXi 4.1 servers in our datacenter for two new PowerEdge R710s. The current servers are hosting something like 25 VMs each. We're on the free hypervisor right now; however, we've just purchased licenses for vSphere 4 and vCenter, and we will definitely want to upgrade our keys to run on vSphere 5 as soon as it becomes available. I'm looking for advice on how to make this upgrade as painless as possible. A few concerns of mine:

- We're planning to upgrade the controllers on our SAN.
- The new R710 servers have fibre HBAs, and we're planning to direct-connect these from the servers to the SAN. We have no fibre switches in this config, and we aren't concerned with redundancy here as it's just a test/dev environment as opposed to production IT.
- As I see it, a big complication is that we're running on iSCSI now and want to switch everything over to fibre on the new servers.
- Complication number two: I have tons of resource pools defined on both hosts, and ideally I want to replicate/restore the configs from the old hosts to the new ones. Can I back up the config on both machines and then restore it to the new ones, given all of the storage changes we're planning?
- A possible complication is vSphere 5 itself: will this just be the usual patch upgrade, or will it require a clean install?
- My understanding is we're somewhat SOL with the new licensing model with respect to memory, as we purchased the new servers with two physical CPUs and 192GB of memory each; probably not much we can do about that.

Thanks in advance for any advice.