digitalnomad's Posts

Hi All, I need some advice on my proposed approach for a new DR POD that we're standing up. I currently have two sites, Prod and DR, running SRM 5.8.1 in a legacy environment that has been upgraded over time: 4.x to 5.1 to 5.5 to 5.8. Satellite offices in the Prod vCenter fail over into DR using MirrorView. I'm setting up a new DR POD (network, SAN, servers) to be shipped to a hosted DC and have created a new VMware infrastructure to support it, with replication based on RecoverPoint. The approach I've outlined below is based on notes mostly covering 6.x; I haven't seen any specific procedures documented for 5.8.1. Does this seem like a feasible approach? Any comments on what to be prepared for or lessons learned?

1. Cold snap all existing involved servers and DBs, including the existing vCenters
2. Remove the existing Protection Groups and gut the configuration back to a default setup:
   - Delete Recovery Plans
   - Remove protection from VMs
   - Delete Protection Groups
   - Delete all inventory mappings
   - Remove the placeholder datastores
   - Remove the array managers
   - Break the pairing
3. Uninstall the SRAs at the OLD recovery site
4. Perform an uninstall of SRM from the existing OLD recovery site box and remove the plugin from vCenter
5. Install the new SRM node in the hosted DR as the default site with EMC RecoverPoint SRA v2.2

If the install fails to mate with the Prod vCenter, we need to be prepared to completely gut both SRM sites from the environment and reinstall from scratch.

Many Thanks, DGN
Looking for a little help on some advanced SAN configuration work. I'm in the middle of working out some storage issues with my SAN team and EMC. One issue that surfaced was the use of the MASK_PATH statement in creating claim rules to eliminate errors associated with PowerPath claiming the presented XIO controller (EMC KB 000487098). The XIO controller is also assuming a LUN 0 ID even though no bootable device is presented. The subsequent deluge of errors looks like this:

"WARNING: ScsiClaimrule: 1318: Path vmhba4:C0:T5:L0 is claimed by plugin NMP, but current claimrule number 340 indicates that it should be claimed by plugin PowerPath."

SAN and support say up and down to follow the technote, although I found another article on their support site that says to discount the errors (EMC Art # 000468047). I'm curious whether anyone else has crossed this issue, and whether they worked this path to resolution or just let sleeping dogs lie. Also, am I approaching the MASK_PATH rules correctly? Our hosts have 4 adapters with 4 paths to each XIO device, bringing the total to 16 paths. If I'm going to implement this fix and mask the controller entirely, I suspect I'm going to have to create 16 individual claim rules to account for each path? If so, thank God they didn't go with 8 paths....

Many Thanks, DGN

Follow-up... Yes, the correct approach was creating one rule for each path; however, I ultimately abandoned it because it complicated the host configuration. No adverse effects have surfaced. Tested and validated.
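For anyone who does want to go the masking route, here's roughly what one per-path rule looks like from the ESXi shell. This is a sketch following the generic MASK_PATH procedure rather than EMC's exact technote; the rule number (500) and the path coordinates are placeholders taken from my example error, not values to copy blindly:

# Mask one path to the XIO controller (the L0 path on vmhba4, target 5)
esxcli storage core claimrule add --rule 500 --type location --adapter vmhba4 --channel 0 --target 5 --lun 0 --plugin MASK_PATH
# Repeat with unique rule numbers (501-515) for the remaining 15 paths, then:
esxcli storage core claimrule load
esxcli storage core claimrule run
# Confirm the runtime and file classes of the rules line up
esxcli storage core claimrule list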
Being adventurous, and having had the project pushed back a little, I decided to venture down the path of the enhanced configuration under 5.5 as well as creating a LAG. I created two native 5.5 enhanced vDSs, one for internal use with 2x 10GbE (1 A-side / 1 B-side) and the other for external use with 4x 10GbE (2 A-side / 2 B-side). Using KB 2051826, I tried creating a 4-port LAG, which was easy enough; however, we could not get the vPCs up no matter what load-balancing settings we used, even resorting to a no-LAG configuration with settings mimicking the parallel configuration using "Route based on IP hash". Neither the channels nor the vPC would come up. I ended up calling VMware Support with my network tech, with the initial belief that we may have needed an A-side LAG of 2 and a B-side LAG of 2. The first tech had no understanding of the constructs, and after 2 hours on the phone with a network escalation tech who ran in circles...we abandoned the effort. In the end, I gutted the native 5.5 vDSs from the hosts, then recreated them as 5.1 vDSs. I upgraded them to 5.5 but did not choose enhanced mode, leaving them in basic. I changed the load-balancing settings of the port groups to "Route based on IP hash", then migrated my physical NICs over to the uplinks. A celebratory reboot and all came up well. We're perplexed. Could this be some weird firmware bug?

Regards DGN
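PS: For anyone retracing this, it's worth capturing what the host thinks the LAG is doing while the channel is down. A quick sketch, assuming the host is on a 5.5 enhanced vDS with a LAG defined (that's when the lacp namespace shows up):

# Show LACP negotiation status for any LAGs the host participates in
esxcli network vswitch dvs vmware lacp status get
# List the distributed switches and which uplinks are bound to them
esxcli network vswitch dvs vmware list

Comparing the host-side state against "show port-channel summary" and "show vpc" on the 5Ks is how we tried to narrow down where the negotiation was dying.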
Hello All, I'm a little confused by the new enhanced features of the 5.5 vDS and am trying to determine the best implementation for the front end of our VMware infrastructure...any guidance would be appreciated, because I can't seem to determine whether advanced configuration is necessary. Here's the hardware and network setup I'm working with: each host has 6x 10GbE NICs, 2 for host operations (MGMT, vMotion) and 4 for data (external VLANs). Network connections are trunked: 2 VLANs for host ops and 17+ for data. Physical links are split between A-side/B-side 2232 FEXes up to 5000s. The data ports are bound as LACP pairs, then vPC'd, for 40 Gb of available data bandwidth and 20 Gb for console. I've seen some of Chris Wahl's videos, his networking book, and his articles, but am still trying to determine whether I need to switch to enhanced mode and go through LAG configuration. Is this a requirement? The other vDSs in the environment are either in basic mode, probably upgraded by my predecessor from 4.x, or a 1000v that needs to be phased out because of lack of Network team support. Thanks in Advance...DGN
Thanks Cop.... owe ya a signal 8. I agree, but HP has been having some severe problems with their software builds of late, especially with their utilities bundle crapping out hosts. Hopefully they can add a little QA to their software builds. That basically nails the symptomology.... So, in summary: an HPSA driver later than v60, driving an HP Smart Array P410 controller of any firmware vintage, with a spare drive configured. The spare-drive configuration seems to cause an I/O back-feed into the driver, blowing it up at irregular intervals. This issue will not surface unless a spare drive is configured on the array controller. The existing HP case # from VMware Engineering is 4651641937. Feel free to jump on the boat; this is a long-standing issue.
Ouch, and yes you should be concerned. Even though they have done some substantial work on APDs since ESXi 4.x, the risk is still there...think about clusters and how you would react if kneecapped...hopefully your hosts haven't zombied. Check responsiveness by running commands via SSH or the console. If a host is unresponsive, you may have to RDP into the guests on it and shut them down, then cold boot the host. Run an RVTools export of which VMs are where, or make a list, in case you lose some VMX files. If the hosts are responsive, vacate each one and bounce it.

I had my storage team yank some drives from one of my clusters without checking with me first recently, and I put together this little ditty from some good posts on the matter...a practice they will never repeat after the outage they caused. I'd give credit where it's due on this one, but I never tracked the original posts...as always, thank you...

Best Practice: How to correctly remove a LUN from an ESX host

WHY: Yes, at first glance, you may be forgiven for thinking that this subject hardly warrants a blog post. But for those of you who have suffered the consequences of an All Paths Down (APD) condition, you'll know why this is so important. Let's recap on what APD actually is. APD is when there are no longer any active paths to a storage device from the ESX host, yet the ESX host continues to try to access that device. When hostd tries to open a disk device, a number of commands such as read capacity and read requests to validate the partition table are sent. If the device is in APD, these commands will be retried until they time out. The problem is that hostd is responsible for a number of other tasks as well, not just opening devices. One task is ESX-to-vCenter communication, and if hostd is blocked waiting for a device to open, it may not respond in a timely enough fashion to these other tasks. One consequence is that you might observe your ESX hosts disconnecting from vCenter. We have made a number of improvements to how we handle APD conditions over the last number of releases, but prevention is better than cure, so I wanted to use this post to highlight once again the best practices for removing a LUN from an ESX host and avoiding APD:

http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2004605
http://www.definit.co.uk/2013/08/vsphere-basics-correctly-decommissioning-a-vsphere-datastore/
http://www.reddit.com/r/vmware/comments/2y3hz9/one_does_not_simply_remove_a_datastore_from/
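For the host-side portion of that sequence, the ESXi shell equivalents look roughly like this. This is a sketch — the volume label and naa ID are placeholders, and KB 2004605 above remains the authoritative order of operations:

# 1. After evacuating VMs/templates, unmount the datastore from each host
esxcli storage filesystem unmount --volume-label=MyDatastore
# 2. Detach the backing device so the host stops issuing I/O to it
esxcli storage core device set --state=off -d naa.60000000000000000000000000000001
# 3. Verify the device shows as detached BEFORE storage unpresents the LUN
esxcli storage core device detached list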
Darn, couldn't reconfigure the spare on the fly even with the advanced feature pack. All, my G7s are at remote sites as well. Can anyone confirm that the trigger is the actual "spare" configuration? Here's the latest update from HP:

"This e-mail is with reference to the case number: xxx logged for DL580 G7. I could gather that 4651641937 is already elevated and the Level 2 support are working on the issue. Please confirm if I can go ahead and close the case xxxx or keep the case open. If the issue is not resolved or you need further assistance please get back to us on chat for further support. We are available 24x7 at www.hp.com/go/hpchat . Thank you for contacting HP!"

Check's in the mail.... DGN
I have confirmed that my drives are in a mirror configuration with a dedicated spare...not a RAID 5 configuration. I'll try running a reconfigure and see if that will alleviate the errors; however, due to compliance issues, I can't run in that configuration. For the record, I tried every release of the driver: 116, 114, 84, and 74 all failed. I had to rev back to 60. I tied this back to VMware Engineering's case with HP, which is ongoing: 4651641937.

Regards DGN
Especially when coming into a new shop... I'm always interested in knowing what the full build and source of a host is which is why I like Running  "esxcli software profile get" from a telnet session. It will also give me source media and maintenance history, you never know what your predecessor may have cooked up.... (Updated) HP-ESXi-5.5.0-Update2-iso-550.9.2.30.9    Name: (Updated) HP-ESXi-5.5.0-Update2-iso-550.9.2.30.9 <<<<<Indicates factory HP Custom CD    Vendor: schmoozvmhost    Creation Time: 2015-09-15T21:37:39    Modification Time: 2015-09-15T21:37:39    Stateless Ready: True    Description:       2015-09-15T21:37:38.861718+00:00: The following VIBs are       installed:         net-tg3       3.137h.v55.1-1OEM.550.0.0.1331820       ----------       2015-09-15T21:37:28.015677+00:00: The following VIBs are       installed:         qlnativefc    1.1.48.0-1OEM.550.0.0.1331820       ----------       2015-09-15T21:37:22.765301+00:00: The following VIBs are       installed:         scsi-hpsa     5.5.0.106-1OEM.550.0.0.1331820       ----------       2015-09-15T21:02:48.486771+00:00: The following VIBs are       installed:         powerpath.cim.esx     5.9.1.02.00-b054         powerpath.plugin.esx  5.9.1.02.00-b054         powerpath.lib.esx     5.9.1.02.00-b054       ----------       2015-09-15T20:33:04.191050+00:00: The following VIBs are       installed:         hp-ams        550.10.2.0-22.1198610         hpssacli      2.20.11.0-5.5.0.1198611         hp-esxi-fc-enablement 550.2.3.16-1198610         hp-smx-provider       550.03.08.00.12-1198610       ----------       2015-09-15T20:11:20.432420+00:00: The following VIBs are       installed:         scsi-lpfc820  8.2.4.151.65-1OEM.500.0.0.472560         scsi-qla2xxx  934.5.20.0-1OEM.500.0.0.472560         misc-drivers  5.5.0-2.62.2718055       ----------       2015-09-15T20:01:05.605100+00:00: The following VIBs are       installed:         tools-light   5.5.0-2.62.2718055         esx-base      5.5.0-2.62.2718055       ----------       HP Custom Image Profile for ESXi 5.5.0 ISO    VIBs: ata-pata-amd 0.3.10-3vmw.550.0.0.1331820, ata-pata-atiixp 0.4.6-4vmw.550.0.0.1331820, ata-pata-cmd64x 0.2.5-3vmw.550.0.0.1331820, ata-pata-hpt3x2n 0.3.4-3vmw.550.0.0.1331820, ata-pata-pdc2027x 1.0-3vmw.550.0.0.1331820, ata-pata-serverworks 0.4.3-3vmw.550.0.0.1331820, ata-pata-sil680 0.4.8-3vmw.550.0.0.1331820, ata-pata-via 0.3.3-2vmw.550.0.0.1331820, block-cciss 3.6.14-10vmw.550.0.0.1331820, char-hpcru 5.5.6.6-1OEM.550.0.0.1198610, char-hpilo 550.9.0.2.3-1OEM.550.0.0.1198610, ehci-ehci-hcd 1.0-3vmw.550.0.0.1331820, elxnet 10.2.445.0-1OEM.550.0.0.1331820, esx-base 5.5.0-2.62.2718055, esx-dvfilter-generic-fastpath 5.5.0-0.0.1331820, esx-tboot 5.5.0-2.33.2068190, esx-xlibs 5.5.0-0.0.1331820, esx-xserver 5.5.0-0.0.1331820, hp-ams 550.10.2.0-22.1198610, hp-build 550.9.2.30.9-1198610, hp-conrep 5.5.0.1-0.0.8.1198610, hp-esxi-fc-enablement 550.2.3.16-1198610, hp-smx-provider 550.03.08.00.12-1198610, hpbootcfg 5.5.0.02-01.00.5.1198610, hpnmi 550.2.3.5-1198610, hponcfg 5.5.0.4.4-0.3.1198610, hpssacli 2.20.11.0-5.5.0.1198611, hptestevent 5.5.0.01-00.01.4.1198610, ima-be2iscsi 10.2.250.1-1OEM.550.0.0.1331820, ima-qla4xxx 500.2.01.31-1vmw.0.3.100400, ipmi-ipmi-devintf 39.1-4vmw.550.0.0.1331820, ipmi-ipmi-msghandler 39.1-4vmw.550.0.0.1331820, ipmi-ipmi-si-drv 39.1-4vmw.550.0.0.1331820, lpfc 10.2.455.0-1OEM.550.0.0.1331820, lsi-mr3 0.255.03.01-2vmw.550.1.16.1746018, lsi-msgpt3 00.255.03.03-1vmw.550.1.15.1623387, misc-cnic-register 1.710.70.v55.1-1OEM.550.0.0.1331820, 
misc-drivers 5.5.0-2.62.2718055, mtip32xx-native 3.3.4-1vmw.550.1.15.1623387, net-be2net 4.6.100.0v-1vmw.550.0.0.1331820, net-bnx2 2.2.5f.v55.16-1OEM.550.0.0.1331820, net-bnx2x 2.710.70.v55.7-1OEM.550.0.0.1331820, net-cnic 2.710.70.v55.5-1OEM.550.0.0.1331820, net-e1000 8.0.3.1-3vmw.550.0.0.1331820, net-e1000e 1.1.2-4vmw.550.1.15.1623387, net-enic 1.4.2.15a-1vmw.550.0.0.1331820, net-forcedeth 0.61-2vmw.550.0.0.1331820, net-igb 5.2.7-1OEM.550.0.0.1331820, net-ixgbe 3.21.4-1OEM.550.0.0.1331820, net-mlx4-core 1.9.9.0-1OEM.550.0.0.1331820, net-mlx4-en 1.9.9.0-1OEM.550.0.0.1331820, net-mst 2.0.0.0-1OEM.550.0.0.600000, net-nx-nic 5.5.641-1OEM.550.0.0.1331820, net-qlcnic 5.5.190-1OEM.550.0.0.1331820, net-tg3 3.137h.v55.1-1OEM.550.0.0.1331820, net-vmxnet3 1.1.3.0-3vmw.550.2.39.2143827, ohci-usb-ohci 1.0-3vmw.550.0.0.1331820, powerpath.cim.esx 5.9.1.02.00-b054, powerpath.lib.esx 5.9.1.02.00-b054, powerpath.plugin.esx 5.9.1.02.00-b054, qlnativefc 1.1.48.0-1OEM.550.0.0.1331820, rste 2.0.2.0088-4vmw.550.1.15.1623387, sata-ahci 3.0-21vmw.550.2.54.2403361, sata-ata-piix 2.12-10vmw.550.2.33.2068190, sata-sata-nv 3.5-4vmw.550.0.0.1331820, sata-sata-promise 2.12-3vmw.550.0.0.1331820, sata-sata-sil 2.3-4vmw.550.0.0.1331820, sata-sata-sil24 1.1-1vmw.550.0.0.1331820, sata-sata-svw 2.3-3vmw.550.0.0.1331820, scsi-aacraid 1.1.5.1-9vmw.550.0.0.1331820, scsi-adp94xx 1.0.8.12-6vmw.550.0.0.1331820, scsi-aic79xx 3.1-5vmw.550.0.0.1331820, scsi-be2iscsi 10.2.250.1-1OEM.550.0.0.1331820, scsi-bfa 3.2.5.0-1OEM.550.0.0.1331820, scsi-bnx2fc 1.710.70.v55.3-1OEM.550.0.0.1331820, scsi-bnx2i 2.710.70.v55.6-1OEM.550.0.0.1331820, scsi-fnic 1.5.0.4-1vmw.550.0.0.1331820, scsi-hpdsa 5.5.0.26-1OEM.550.0.0.1331820, scsi-hpsa 5.5.0.106-1OEM.550.0.0.1331820, scsi-hpvsa 5.5.0-92OEM.550.0.0.1331820, scsi-ips 7.12.05-4vmw.550.0.0.1331820, scsi-lpfc820 8.2.4.151.65-1OEM.500.0.0.472560, scsi-megaraid-mbox 2.20.5.1-6vmw.550.0.0.1331820, scsi-megaraid-sas 5.34-9vmw.550.2.33.2068190, scsi-megaraid2 2.00.4-9vmw.550.0.0.1331820, scsi-mpt2sas 15.10.06.00.1vmw-1OEM.550.0.0.1198610, scsi-mptsas 4.23.01.00-9vmw.550.0.0.1331820, scsi-mptspi 4.23.01.00-9vmw.550.0.0.1331820, scsi-qla2xxx 934.5.20.0-1OEM.500.0.0.472560, scsi-qla4xxx 644.55.35.0-1OEM.550.0.0.1331820, tools-light 5.5.0-2.62.2718055, uhci-usb-uhci 1.0-3vmw.550.0.0.1331820, xhci-xhci 1.0-2vmw.550.2.39.2143827
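A couple of companion commands I run alongside profile get, to cross-check what's actually installed versus what the profile claims (the grep filter below is just an example pattern for flushing out OEM packages):

# Full inventory of installed VIBs with vendor and acceptance level
esxcli software vib list
# Narrow to OEM/HP packages to spot predecessor customizations
esxcli software vib list | grep -i -e hp -e oem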
Yes, this should work, and since it sounds like development rather than production...go for it. The issue surfaces when your accumulated diff files go beyond 72 hours: the amount of disk I/O, and the time it takes to commit changes, increases substantially. If you're using other snapshot-based technologies for VMDK image backups, this can also increase the risk of orphaning the diff files or corrupting the VMDKs.

Regards DGN
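PS: If it helps, a quick way to spot delta files that have been quietly accumulating is from the ESXi shell (a rough sketch; busybox find on ESXi supports this form):

# Snapshot delta disks across all datastores; old timestamps = long-lived snapshots
find /vmfs/volumes/ -name "*-delta.vmdk" -exec ls -lh {} \;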
Exchange deserves its own level of caution..... Are you also changing storage frames in the process? Because of the performance issues with Exchange, DAGs, and high-perf SQL on frames, we found that investing in Storage Foundation gave us a lot of flexibility. Thinking out loud here: you probably want to keep the pRDMs for performance reasons, and hopefully the local system partitions are VMDKs and the data volumes are pRDMs. I'd want a back-out plan in case your vHardware/Tools upgrades go bad, so consider this:

1. Add 7 pRDMs (mapped in VMFS) to your existing DAGs, create mirrors for your drives (hopefully the partitions are already dynamic), and let them replicate and build the mirrors. Hell, you could even mirror to VMDKs.
2. On go night, set all your services to manual using msconfig, then shut down the server. (Make sure you document the complete configuration of your server, pointers, and devices.)
3. Clone your existing server or copy the VMDKs to the new storage.
4. Move the pRDM copy LUNs to the target new VM's configuration, choose "I copied it" on boot, rename the VM, then straighten out your volumes (import disk, etc.), ensuring correct paths.
5. Once all is up, upgrade your Tools, then the virtual hardware.
6. Test (beware of MAC address issues), and if all goes well you're done. If it doesn't, shut it down, power up the original box, and plan accordingly.

Put it in a lab and test it out first, and read up on MS software mirroring. Hope this helps..good luck DGN
Found version 116 on the HP website, released 11/30/15 [ https://h20565.www2.hpe.com/hpsc/swd/public/detail?sp4ts.oid=4142793&swItemId=MTX_ae7b6b8db7044b5b89455d4e47&swEnvOid=41… ], and the issue still surfaces.
Apparently problems still exist with the updated 114 HPSA driver; I opened a new thread: Datastore / Disk latency problems with HP ProLiant G7 - HP Smart Array P410i controller "WARNING: LinScsi: SCSILinuxAbo…
After updating a mixture of G7, G8, and G9 VM hosts to 5.5 Update 3a and the September 2015 cookbook release (SPP JUN 15), I started having host errors, specifically on my G7 hardware. One host went so far as to disconnect from vCenter. I went the full-boat update on the OS to bring it fully in line with HP's recipe; this was the first time hitting drivers in quite a while. So when the first errors started coming in, I immediately suspected the HPSA driver v106 (5.5.0.106-1OEM), which was updated from v50 (5.5.0.50-1OEM).

vCenter is reporting errors:
Lost access to volume 56424481-7f094eb0-8ee6-80c16e6e15e0 (VMHost_local) due to connectivity issues. Recovery attempt is in progress and outcome will be reported shortly. info 2/2/2015 9:00:45 AM (VMHost_local)

VMKernel.log was reporting some conflicts with claim rules between PowerPath and the NMP for the local disk, but that was cleared:
2015-11-29T07:53:04.162Z cpu22:33327)WARNING: LinScsi: SCSILinuxAbortCommands:1843: Failed, Driver hpsa, for vmhba1

Hostd.log:
2015-11-30T19:37:48.614Z [248C4B70 info 'Vimsvc.ha-eventmgr'] Event 1820 : Lost access to volume 4ffd89b4-760e9689-81e3-e83935a81a45 (gldpiesx002_local) due to connectivity issues. Recovery attempt is in progress and outcome will be reported shortly.
2015-11-30T19:37:48.615Z [248C4B70 info 'Vimsvc.ha-eventmgr'] Event 1821 : Successfully restored access to volume 4ffd89b4-760e9689-81e3-e83935a81a45 (gldpiesx002_local) following connectivity issues.

I opened tickets with both HP and VMware. VMware came back with the first fix, which was to upgrade the HPSA driver (hpsa 5.5.0.114-1OEM), downloadable from https://my.vmware.com/web/vmware/details?downloadGroup=DT-ESXI55-HP-HPSA-550114-1OEM&productId=353. However, overnight the errors returned. HP has suggested back-revving to 5.5.0.74-1 (1 Oct 2014), but from a previous discussion here [ Datastore / Disk latency problems with HP ProLiant DL380 G7 - HP Smart Array P410i controller after SPP 2014.09.0 / hpsa… ] I believe that was also a bad version. I'm going to wander down the road a bit and see where it leads, and if necessary go back to the 74 or even the 50 release.

Comments welcome
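For anyone following along, the driver swap itself is simple from the shell once the offline bundle is staged. A sketch — the datastore path and bundle filename are placeholders for whichever release you're testing, and the host needs maintenance mode plus a reboot afterward:

# Confirm the currently installed hpsa driver version
esxcli software vib list | grep scsi-hpsa
# Install the replacement driver from its offline bundle, then reboot
esxcli software vib install -d /vmfs/volumes/datastore1/hpsa-offline-bundle.zip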
vCenter 5.5 Update 2e with SRM 5.5.1.5. Well, the word has come down from the corporate security gods...it will be done: on servers, disable everything below TLS 1.1, now a combined PCI and HIPAA requirement. So I opened a ticket with VMware to confirm a process. As we move forward, there's no doubt security requirements will rise; we have a tendency to be on the almost-bleeding edge of the security knife. What's sad is there seems to be no comprehensive guide from VMware on critical security configuration practices at this level, let alone the certificate discussion. There has to be something on the government side, because the Server Hardening Guide Excel spreadsheet doesn't cover current needs.

I disabled SSLv3 (leaving TLS 1.0 and up) on my vCenter and all looked good, except it immediately broke the connections with my SRM server. The following commands can be used to confirm connection status from the bin directory of your OpenSSL install:

openssl.exe s_client -connect [VMHostFQDn]:443 -ssl3
openssl.exe s_client -connect [VMHostFQDn]:443 -tls1

Now, after 3 sessions with support: SRM is SSLv3-dependent, so it looks like I'm getting a security exception.

Regards, DGN
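PS: To round out the protocol checks, the same s_client test extends to the newer versions, assuming your OpenSSL build is 1.0.1 or later (which added these flags). A handshake failure on -ssl3 combined with success on the TLS flags is what you want to see after the lockdown:

openssl.exe s_client -connect [VMHostFQDn]:443 -tls1_1
openssl.exe s_client -connect [VMHostFQDn]:443 -tls1_2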
82 days after the last reboot and remediation, I experienced the below problem (an issue in a faulty AMS package). Consult: vSphere 5.5 VM console access errors / MKS connection terminated by server & MKS malformed response from server
Well, it looks as though this issue is still alive and well. I worked with VMware Support to identify the root cause of the failures, and after checking vmkwarning.log it was confirmed to be yet another issue with the HP AMS service. In our case we run scratch off a different partition, and this took 82 days, almost exactly, to surface. I'm still awaiting feedback from HP on the "correct" way to remediate the issue. Frankly, this is my second go-around in 3 months with similar issues that have hazarded my environment, and I'm seriously considering just removing it altogether.

Here are the details on our affected build:
HP DL560 G8
ESXi 5.5.0 189274
FW 02-2014 B
HP Offline Management Bundle v1.7-13
AMS v550.9.6.0-12.1198610
Build: original HP Custom CD 5.5x

Reported errors on affected hosts:
- Console: "Can't Fork"
- vMotion timed out
- vCenter Host Configuration Security Profile access: "Call "HostImageConfigManager.QueryHostAcceptanceLevel" for object "imageConfigManager-252705" on vCenter Server "your vcenters fqdn" failed."
- vmkwarning.log: "2014-10-14T20:20:53.668Z cpu39:46241)WARNING: Heap: 4128: Heap_Align(globalCartel-1, 136/136 bytes, 8 align) failed. caller: 0x41801ffe5429"
- Avamar 7.01 backup and restore failure: Restore Error 10011 Failed to write to disk, backup failure, snapshot failure

Recommended remediation:
1) Lower memory overhead on affected hosts by shutting off 25% or more of the VMs
2) Log in to Tech Support Mode
3) Run the following commands:
   - Stop the service via SSH or the ESXi Shell: /etc/init.d/hp-ams.sh stop
   - Disable the service at boot via SSH or the ESXi Shell: chkconfig hp-ams.sh off
At this point host operations (backups, vMotion) should return to normal. No immediate host reboot has been required thus far.

Long term: I'm trying to work with HP to identify a solid remediation process. It has been stipulated by HP's support team that you must remove the VIB and reboot prior to updating the AMS. What may be happening is that not all affected components are removed, and simply applying the subsequent patch or full bundle may not refresh all of them.

Here's a list of resources on the subject matter thus far:
VMware KB 2085618
HP mmr_kc-0120902 http://tinyurl.com/mxkt9es
HP FW http://tinyurl.com/k6hebz4
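Coming back to HP's stipulation about removing the VIB first: in practice that would look something like this from the ESXi Shell. A minimal sketch — the bundle path and filename are placeholders for whatever AMS build you're applying, and the host should be in maintenance mode:

# Remove the hp-ams VIB entirely, then reboot
esxcli software vib remove -n hp-ams
reboot
# After the reboot, re-install the updated AMS from its offline bundle if desired
esxcli software vib install -d /vmfs/volumes/datastore1/hp-ams-550-bundle.zip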
I have several compiled ISOs, 3 instances of VUM, and files everywhere. The versioning and recipe were in my predecessor's head when he left.
Hello All, I recently came into a new environment and am trying to piece together the previous admin's build recipe across 4 separate environments with 5 different hardware platforms. The environment is 80% virtualized, all production, with a lot of rumored hardware/firmware/driver tweaks. In all honesty, I can't tell whether he used the HP custom image with VIB tweaks or the VMware base image with VIB modifications, nor to what level these were customized. Being production, I'm adopting a least-impact methodology, since I have to maintain security regulatory compliance standards. Is there an easy way to look at the contents of an image file originally built with Image Builder and determine what the source was and what third-party VIBs were imported? Many Thanks for any suggestions that can be rendered, DGN
Binding to a limited set of hosts within a cluster is not preferable...too limiting, in management's opinion. Personally, I'd replace them all with VCS using VMDKs, but the cost is prohibitive.