T_16's Posts

Is there any sort of actual quality control at VMware any more? I ask because almost everything gets worse, and makes my life more and more difficult, with every iteration of "progress". Take something very simple: the Recent Tasks area at the bottom of the screen in vCenter 8.0.2. You should be able to resize all the columns to get back to the nice condensed single-line format, without word wrapping in the boxes, except you can't. Every column is resizable except the very last one, so NO MATTER what you do, the final Server column will ALWAYS word wrap, leaving you with hideous fat rows that hide what is actually going on. I am almost 100% sure all columns used to be resizable. It smacks of sheer laziness, or poor quality control, and makes me wonder if anyone at VMware is actually testing anything any more.

Some of the UI decisions are utterly terrible; it genuinely makes me question the "skills" of the person doing the coding, or making moronic decisions like putting little X's on the icons in the tree, which makes it look like something is wrong. I've lost faith in VMware products and the so-called quality control. You now notice so many little bugs, foibles and irritations which never used to be there, and they make your life a misery when it comes to basic administration. The new UI is absolute rubbish: I struggle to make out the icons, and the lack of colour makes everything ugly. It boggles my mind that a huge team of UI "experts" can make such shocking decisions, but I guess UI guys have to be seen to be doing SOMEthing, right? VMware, get a grip; I'm close to ditching all your products and shoving everything into the cloud. I have reached the point where it's not worth the expensive outlay any more. Sorry. Round in circles we go, until another genius has a bright idea for yet another UI revamp that makes things worse, and round and round we go.
Never heard of that plug-in, but it certainly did the trick; wonderful advice. Thank you.
Do not hold your breath; the same issue is in 7.0u3d. It seems to fail if we select the current image.
Of course. That is what we pay thousands and thousands and thousands of pounds a year for. Fixing and chasing bugs in VMware products is costing us too much now.      
Still there for us on vCenter 7.0u3d. Shameful quality control. Imagine not addressing such a fundamental bug over so many product revisions. VMware is going to the dogs; we are going to start looking elsewhere for our datacenter stack, as we cannot tolerate it any more.
Hi, I just wanted to complain that the UI change in this release is absolutely terrible. The dead space around the tree list of VMs and clusters is shocking, and why round the scrollbars? It looks awful. I compared it to our older 7.0u2d and, despite that taking some time to get used to, its information density in the left-hand tree view was far superior. Even looking back at some of our old 6.7 dev systems, the UI was better: more colour, easier to read, and much denser tree-view information. What on Earth is going on with the UI I have no idea. Please, please change the view back and get rid of all the dead space; it is making our daily jobs harder.
Did this get around the super-slow issue of NFC data trickling out of port 902 on each ESXi host?
Following a copy of some VMs from one infrastructure to another, I seem unable to get the VMs to generate new MAC addresses containing the Runtime ID of the new vCenter instance. I followed this guide: https://kabri.uk/2008/07/16/force-vmware-to-generate-a-new-mac-address/ and removed these entries in the VMX file:

ethernet0.addressType
uuid.location =
uuid.bios =
ethernet0.generatedAddress =
ethernet0.generatedAddressOffset =

Now the VM does get a new MAC, but NOT one containing the vCenter ID; it gets something like 00:50:56:9d:52:05, which bears no correlation to the vCenter ID. Also, instead of being set to "vpx", which would indicate a vCenter-generated MAC, the addressType setting gets set to "generated". I am slightly confused as to how I can force a MAC change allocated by vCenter instead of this "generated" address. We run dvSwitches; do I have to remove the VM from the inventory and re-add it? We would lose all the performance data, and it might mess up our MABS backups. Does anyone have any idea how I can force vCenter to properly allocate a MAC with its proper ID to these VMs? Thanks; it's driving me crazy.
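For what it's worth, the manual VMX edit can be scripted across many VMs. A minimal sketch of just the key-stripping step, assuming you have the .vmx contents as text (the key list is simply the one from the guide above; back up the file first, and treat the helper name as mine, not VMware's):

```python
# Keys whose removal is supposed to trigger MAC regeneration on next
# registration, per the linked guide. Illustrative only.
KEYS_TO_STRIP = (
    "ethernet0.addressType",
    "uuid.location",
    "uuid.bios",
    "ethernet0.generatedAddress",
    "ethernet0.generatedAddressOffset",
)

def strip_mac_keys(vmx_text: str) -> str:
    """Return the VMX contents with the MAC-related keys removed."""
    kept = []
    for line in vmx_text.splitlines():
        # A VMX line looks like:  key = "value"
        key = line.split("=", 1)[0].strip()
        if key not in KEYS_TO_STRIP:
            kept.append(line)
    return "\n".join(kept) + "\n"
```

You would read the .vmx from the datastore, run it through this, and write it back before re-registering the VM; that workflow is outside the sketch.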
Correct, you need no ELM for the standalone fling, but for the integrated one, which registers the plug-in, it seems you do.
I've set up two new clusters, each at a different geographical site. I've given the hosts in each cluster private provisioning and hot-vMotion IP addresses on dedicated vmkernels, to make use of a GRE tunnel between the two core server switch stacks at each datacenter. This is preferable because inter-DC traffic is MUCH faster and closer to line speed between the geo-locations; over our regular MPLS network the traffic rate can drop from the GRE tunnel's 440Mbps to as low as 170Mbps.

All good so far: we can cold migrate and hot migrate between clusters at the two sites. But it means our provisioning vmkernel has a 192.168.x.x address, as does the hot-vMotion vmkernel. The Cross vCenter vMotion Utility fling therefore fails to import/clone some old VMs from our other setups into the new one, as when connected to the target vCenter it must surely be trying to copy cold traffic over the generally accessible management interface, which has a "normal" routable IP on our networks. I am at a loss as to what we can do here. Can VMware Converter copy files directly into a vCenter the way an OVF import does? We can import OVFs to our new setup without issue, but again, I suspect the fling wants to send provisioning traffic to the management IP of whichever ESXi host it picks from the cluster. It sounds like we have no choice but to give all of our hosts a second IP just for cold provisioning traffic, OR let all cold traffic go via management by default. That is super annoying, because it means that during a storage vMotion, hot traffic will be fast via our GRE tunnel while cold traffic crawls over the rest of our standard MPLS network. Don't get me wrong, the GRE tunnel uses the SAME MPLS backbone, but its encapsulation seems to let traffic funnel through much faster.

Any thoughts/advice welcome. I had thought it was possible for two vCenters to negotiate the traffic flow between themselves instead of brokering the connection directly to a host. I am also confused about what happens with an OVA/OVF import then, as surely that traffic is brokered to a host to be written to the datastore? OVA import works perfectly for us. Sorry for the ramble, but I feel stuck.

EDIT: the reason for the separation above is that our 1GbE vmk0 management traffic is on old 1GbE copper, while inter-host traffic is all 10GbE, so it makes sense that a cold migration/storage migration between hosts in the same cluster should use the rapid local 10GbE.

EDIT2: OK, so VMware Converter works for a powered-off VM, but the fling fails for the same VM with "Cannot connect to host"! Does VMware Converter work in a different way to the fling?
I have not done so yet, as I remain unconvinced they can actually fix it.
Hi guys, this problem is driving me crazy and preventing me from putting this kit into production. We have small clusters, six hosts in each: ESXi 6.7U3, a 3PAR 8200 storage array, and vCenter 6.7U3. The servers are DL360 Gen10 with four Intel X710 10GbE ports and a 32GB SD card in each. ESXi is installed on the SD card, and the scratch location is set to a shared iSCSI datastore, with each host having its own folder.

This appeared to work fine until I rebooted the whole cluster of six at the same time, at which point the datastore holding the scratch folders randomly disappears and the host loses its ScratchConfig.ConfiguredScratchLocation setting, which reverts to default. I tried setting Syslog.global.logDir, which I thought made an improvement, until more reboots later the problem was still there. When a host drops its scratch location, it hangs longer on boot and has this error in the log:

LVM: 15237: Failed to open device naa.60002ac0000000000000000200020cb8:1 : Atomic test and set of disk block returned false for equality

Whenever that error happens, the datastore is dropped and the scratch setting is lost. It's utterly infuriating! Out of desperation I removed the elx iSCSI driver/VIB; no difference. I just don't know where to go from here, as pretty much every time we reboot a whole cluster, some of the hosts drop their scratch location and need manual intervention to fix. Not acceptable with only a six-host cluster! Any tips to try and fix this? It is driving me insane and making me feel useless. I should add that it does not happen to all hosts at the same time; it's more random. Even more bizarrely, as soon as an affected host has booted, I CAN access the datastore it had a problem with during boot, so it is not lost forever.
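In case anyone wants to reproduce the per-host scratch layout, here is a rough sketch of how the folder and the advanced-option command could be generated for each host. The `.locker-<host>` naming and the `vim-cmd hostsvc/advopt/update` command form are assumptions on my part; verify them against your own build before running anything on a host:

```python
def scratch_commands(hosts, datastore_volume):
    """Build (mkdir, set-option) command-string pairs, one per host,
    giving each host its own scratch folder on a shared datastore.
    Purely generates strings; nothing is executed here."""
    cmds = []
    for host in hosts:
        path = f"{datastore_volume}/.locker-{host}"
        cmds.append((
            "mkdir -p " + path,
            "vim-cmd hostsvc/advopt/update "
            f"ScratchConfig.ConfiguredScratchLocation string {path}",
        ))
    return cmds
```

The point is only that every host must get a distinct folder; two hosts sharing one scratch path is its own source of grief, separate from the boot-time datastore drop described above.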
All servers are in OV, and I see all the hardware information in vCenter too. I will say that in the downloaded log dump from OV4VC, the proactiveha.log file had a curious pair of lines:

pha_resources.pha_helper get_vc_ov_hosts vcenter hosts to search in ov are : ['39373638-3935-5a43-4a38-313531474647', '39373638-3935-5a43-4a38-313531474646', '...........]
INFO pha_resources.pha_helper get_vc_ov_hosts hosts in both vcenter and oneview are : []
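That empty second list reads like the intersection of the two inventories' host UUID sets coming back empty, which could be something as mundane as a case or whitespace mismatch between what vCenter and OneView report. Conceptually the comparison would be something like this (the helper name is mine, not OV4VC's):

```python
def common_hosts(vcenter_uuids, oneview_uuids):
    """Case-insensitive intersection of host UUIDs from both inventories.
    Normalising case and whitespace first rules out the trivial mismatch."""
    vc = {u.strip().lower() for u in vcenter_uuids}
    ov = {u.strip().lower() for u in oneview_uuids}
    return sorted(vc & ov)
```

If the raw UUID strings from the two sides never match even after normalisation, that would explain the provider vanishing once real hosts are in the cluster.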
Guys, what happened with this in the end? I have the exact same issue with DL360 Gen10, ESXi/vCenter 6.7U3, OneView 5.2, and OV4VC 9.6. Same deal: no Proactive HA provider available in the cluster containing the servers; create a new empty cluster and, voila, it appears. Move a host into the new cluster and the HPE provider disappears off the radar again, not even listed. This is poor. Did anyone ever get an answer?
Thanks. Love it, tried it on a test vcsa and it was all good.
Can I re-use an existing certificate issued by our local AD certificate authority when re-installing a new VCSA (same hostname etc.) from scratch? I have the original .csr file, the original vmca_issued_key.key, and of course the cert itself; is there any way they can be re-used on the replacement VCSA install? Thanks! Sorry for the dumb question.
OK, I see what is happening now with the iSCSI initiator. The HPE hardware initiator is used until about halfway through booting, then the VMware software iSCSI initiator takes over. I was confused because the VMware software initiator takes its name from the HPE hardware initiator name set in the BIOS. Changing this to a VMware-style name after installation shows that ESXi switches to its own software initiator for all iSCSI access beyond that point. That makes it a lot easier to keep a single object for the host in the storage array and just allow two initiators against it; it saves the confusion of naming them the same thing.

EDIT: what about IPs? I notice that ESXi takes the same IP as that set in the BIOS for the initiators there. Are you saying the BIOS-based initiators should be on x.x.x.10 and x.x.x.11, with ESXi then kicking in and using its own IPs of x.x.x.12 and x.x.x.13, or should the BIOS/UEFI-based initiators be configured with the two IP addresses we want in the ESXi iSCSI initiator?
Everywhere I have looked, it says that ESXi will not create a scratch partition on a boot-from-SAN volume. Yet having just installed ESXi 6.7U3 on a 3PAR LUN, I see that the scratch partition is listed as /vmfs/volumes/5eda8c99-dd914acc-f042-48df37424e58, which correlates to the remote LUN. So... should I just leave it?

My question is: once booted, does ESXi switch to the running OS kernel and access this LUN from its own VMware software initiator, or does it use the original HPE iSCSI initiator, basically sharing the NIC between the two different initiators, with writes to the boot LUN done by the underlying HPE hardware network card and writes to the normally configured LUNs done by the VMware OS via its own software iSCSI initiator? I see now that in the 3PAR SSMC console, on a fresh install, only the HPE hardware-level initiator is active and green, so that is what is used throughout the time the host is live. Would sharing out the LUN via hardware and then the ESXi software initiator have any detrimental impact on performance? I guess this must be common, as 10GbE ports are not huge in number on many servers and setups! Any advice really welcome.
I see Log Insight will literally ingest anything from any IP, as long as it is sent to it. How can I prevent this and only allow ingestion from a selected list of machines/IP addresses? Would I have to somehow edit the firewall on the appliance manually? Thanks.
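As far as I can tell there is no built-in source allowlist, so the filtering would have to happen in front of the appliance, e.g. on a firewall or on a syslog relay that forwards only permitted sources. The decision logic itself is trivial; a sketch with made-up networks, just to show the shape of an allowlist check:

```python
import ipaddress

# Hypothetical allowlist: one management subnet plus a single host.
# Replace with your own ranges; these are illustrative only.
ALLOWED = [ipaddress.ip_network(n) for n in ("10.0.0.0/24", "192.168.50.10/32")]

def is_allowed(source_ip: str) -> bool:
    """True if the sender's IP falls inside any allowlisted network."""
    addr = ipaddress.ip_address(source_ip)
    return any(addr in net for net in ALLOWED)
```

A relay running logic like this (or equivalent firewall rules with the same networks) would drop everything else before it ever reaches the ingestion port.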
This isn't really true at all. Two separate sites, two separate clusters: Enhanced Linked Mode all day long. You can manage the whole lot from either of the two vCenters, and if one goes down you can still manage and access the other cluster. It's not true "HA", but easily the next best thing IMO.