cliess's Posts

We're seeing big fluctuations in LAN backups.. anywhere between 8-30 MB/s. This is still much faster than we got with vRanger 3, though. Will be moving to iSCSI-based backups shortly. -Craig
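For a sense of scale, here's a quick back-of-envelope on what that 8-30 MB/s swing means for a backup window. The 500 GB data size is my own assumption for illustration, not a figure from the thread:

```shell
#!/bin/sh
# Back-of-envelope: backup window at the low and high end of the
# observed 8-30 MB/s range. 500 GB of VM data is an assumed figure.
for rate in 8 30; do
    hours=$(awk -v r="$rate" 'BEGIN { printf "%.1f", 500 * 1024 / r / 3600 }')
    echo "${rate} MB/s -> ${hours} hours for 500 GB"
done
# -> 8 MB/s -> 17.8 hours for 500 GB
# -> 30 MB/s -> 4.7 hours for 500 GB
```

So at the bottom of that range, a nightly window gets tight fast, which is a fair argument for moving off the LAN.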
Just wanted to follow up with what we ultimately did for the test:

1. Put one of the hosts into maintenance mode
2. Removed it from the cluster
3. Storage vMotioned all VMs off one of the LUNs
4. In Storage Manager, created a new Host Group and moved in the test vRanger & ESX host, as well as the newly-evacuated LUN
5. Cabled the test vRanger host into the SAN (used an old 2Gb HBA for this test)
6. Restored a VM from the test vRanger host
7. Created a new backup job for the newly-restored VM and selected the Fiber/iSCSI backup option
8. The backup ran and didn't blow anything up. Hooray!
9. Undid everything above to get the host back into the original cluster

Performance wasn't very good, but our test vRanger host is equally poor, so we're attributing the slower-than-expected numbers to that. Also, we didn't bother to install the IBM DSM, so there's a chance that was holding us back as well. Thanks to the few of you who chimed in to assist! -Craig
Hi - This is exactly the route we are going. Unfortunately, it doesn't appear that you can assign LUNs to more than one host group, so the plan is to evacuate an entire LUN using Storage vMotion, remove one host from the cluster, create a new host group with just that one host + the vRanger server + the newly-evacuated LUN, and test. Will let you know how it goes. Thanks for the reply! -Craig
Anyone? Any help would be greatly appreciated! -Craig
Hi Everyone - We're in the final stages of planning our ESX 4.0.0 Update 1 -> ESXi 4.1 upgrade, and as part of this we're going to move to LAN-free backups via the vStorage API. We've already tested this with VMs on our iSCSI SAN and it works like a champ. Last up is VMs that live on our IBM DS4800.

For our test, we zoned in the HBA of our backup server (running Windows 2003 Standard R2 w/ SP2 + vRanger Pro DPP v4.5.3) and created the host and port identifier in Storage Manager (v10.50). At this point, we were hoping to simply allow access to one of the production LUNs with some low-impact VMs, on the odd chance that presenting the disk to both a Windows and an ESX server would cause some kind of corruption.

Unfortunately, since our ESX hosts all live within a Host Group, it appears as though the only method of testing would be to add our test backup host into our ESX server Host Group, which naturally has access to all of our VMFS LUNs. Is this accurate? Are we going about this the right way? Any help would be very much appreciated. Thanks! -Craig
Hi Folks - Running into an issue here that I can't seem to resolve. In the HP Management Homepage, the two installed HBAs (QLA2460s) aren't showing up under the 'Storage' section -- they did under ESX 3.x without issue, though. I've installed the libraries as the readme directs and run their test application with success. Called HP support since the machine is under maintenance, and got nowhere with the tech. More or less, he didn't really know and suggested waiting for a future revision of the agents.

Is anyone else running vSphere 4 U1 on HP hardware with the v8.3.1 management agents able to see their HBAs in the Management Homepage? Here are my system specs:

- HP DL580 G4
- ESX 4 Update 1, build 219xxx
- QLA2460 HBA
- HP Management Agents v8.3.1
- QLogic libraries v2 (per the readme!)

Any help or insight would be greatly appreciated. Thanks! -Craig
> Have you already opened up your ports? Try this: enable firewall pass-through for HP SIM --
> 1. Enable port 2301 incoming: esxcfg-firewall -o 2301,tcp,in,HPSIM

No dice. I still see no storage systems under the two HBAs.. Thanks though! -Craig
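For anyone else chasing the firewall angle: the tip above only opens 2301, but SMH also serves HTTPS on 2381, and the SNMP agents use 161/162. Here's a little dry-run sketch of the full set of esxcfg-firewall commands -- the rule labels ("hpim", "snmp") are arbitrary names I made up, and the port list assumes a standard SMH install. It just prints the commands; pipe the output to sh on the service console itself to apply them:

```shell
#!/bin/sh
# Dry-run helper: prints the esxcfg-firewall commands to open the ports
# HP SMH and the SNMP agents typically use on an ESX service console.
# The rule label (last field) is an arbitrary name of our choosing.
open_port() {
    # $1=port $2=protocol $3=direction $4=label
    echo "esxcfg-firewall -o $1,$2,$3,$4"
}
open_port 2301 tcp in  hpim    # SMH HTTP redirect port
open_port 2381 tcp in  hpim    # SMH HTTPS port
open_port 161  udp in  snmp    # SNMP get/walk
open_port 162  udp out snmp    # outbound SNMP traps
```

(Not that it helped in my case, but worth having the complete list in one place.)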
Hi Folks - Having some difficulty seeing my storage system (IBM DS4800) from within the HP Systems Management Homepage. A few notes about the system:

- HP DL580 G4
- 2x QLA2422 HBAs
- ESX v3.5 Update 1 (build 95350)
- HP Management Agents v8.1
- QLogic libraries installed per their directions

After installing the QLogic libraries (http://support.qlogic.com/support/oem_product_detail_vmware.asp), I could see my two HBAs in the homepage, but cannot see any of the LUNs. Any ideas? We have another ESX v3.5 cluster with an MSA1000 as shared storage, and can see our LUN there no problem via the homepage. I could have sworn we used to be able to see the LUNs prior to HP taking the storage agent libraries out of their management package. Thanks a ton in advance! -Craig
Looks like a wealth of info in that link, thank you -- will give it a look. I'll probably end up reinstalling 2.0.2 in test and trying the upgrade again. It should NOT be this difficult or confusing to retain your existing data! I never had this issue moving from 1.3.1 to 2.0.0 to 2.0.1 to 2.0.2. Thanks again! -Craig
Absolutely selected to upgrade my existing database, no question about that! -Craig
Hi folks - Using the official VMware upgrade documentation and the RTFM upgrade guide, I upgraded my test VC 2.0.2 server to 2.5. Everything went exactly as the documentation said it would, screen-for-screen, but unfortunately, once I fired up the client, my two test ESX servers were no longer in my inventory and I had to re-add them.

Any ideas on where I could have gone wrong? When I ultimately upgrade production, I do NOT want to lose my configuration and especially my historical performance data. For those wondering, the database is SQL 2000 w/ SP4 and it resides on the same machine as VC. Any help would be greatly appreciated! -Craig
No dice.. test traps (via the homepage) work every time, but when I pull a power supply, I don't even see the trap come in. Any other ideas? -Craig
> That's strange. If you were able to receive power supply and drive traps before the SIM fault, and your WhatsUp can receive the test trap, everything should work. Try executing the command 'hpasm configure' on your ESX console; this runs the HP Agents configuration wizard. When asked, specify your trap destination, write and read communities, enabled host, and other appropriate values. To interpret the traps, you also need to import the HP Management Agents MIBs into WhatsUp (you should find them on the SIM CD/DVD).

Hey! I think compiling the MIBs into WhatsUp worked -- I can get the test trap 100% of the time now, whereas before I had to kickstart the agent. I'm working from home today and don't have the ability to shut off power to this rack remotely.. will try tomorrow. Thanks a ton! I'll report back in the morning. -Craig
Hi All - Is anyone using something other than HP SIM to receive SNMP traps from their ESX hosts? Our SIM server took a gigantic dump today and it's going to be a few days before we can get around to reinstalling/reconfiguring it, so I'd like to just use our WhatsUp Gold (v11) server to notify us of power supply/disk/etc. failures.

I've changed the SNMP trap destinations via the Systems Management Homepage (v7.8) and performed the test trap; this works flawlessly -- WhatsUp receives the trap and notifies the appropriate party. However, when we yank a power supply or disk from the server, the machine never sends the traps. It seems like I must be missing something major here. FWIW, the ESX host I'm attempting this on is an HP DL380 G3. Any help would be greatly appreciated! Thanks a ton! -Craig
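One host-side thing worth ruling out (a sketch under my own assumptions: a stock ESX 3.x service console running net-snmp, with hpasm writing its settings into /etc/snmp/snmpd.conf) -- the SMH test trap can succeed while hardware traps go nowhere if snmpd itself has no trapsink configured. Something like this checks for it; the demo runs against a throwaway file rather than the live config:

```shell
#!/bin/sh
# Sketch: check whether an snmpd.conf has a trapsink (trap destination)
# configured. On a stock ESX 3.x service console the file is
# /etc/snmp/snmpd.conf and "hpasm configure" writes these lines; if they
# are absent, hardware traps may never leave the host.
check_trapsink() {
    if grep -qi '^trapsink' "$1"; then
        echo "trapsink configured in $1"
    else
        echo "no trapsink in $1 -- rerun 'hpasm configure'"
    fi
}

# Demo against a throwaway file standing in for /etc/snmp/snmpd.conf;
# 192.0.2.10 is a placeholder receiver address:
tmp=$(mktemp)
echo "trapsink 192.0.2.10 public" > "$tmp"
check_trapsink "$tmp"
rm -f "$tmp"
```

On the real host you'd just run `check_trapsink /etc/snmp/snmpd.conf` (or eyeball the file) instead of the demo.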
Step 2 alone solved the issue I was having after upgrading to VC v2.0.2. I reconnected the two hosts that had issues installing the new VC agent, enabled DRS & HA, and everything is good to go now. Thanks a ton! -Craig

> Try the following things:
> 1. Disable HA
> 2. On the failed ESX host, go to the service console:
>    service mgmt-vmware restart
>    service vmware-vpxa restart
> 3. Check: rpm -qa | grep vpxa -- should give you VMware-vpxa-2.0.2-50618
> 4. Enable HA
> 5. Reconfigure every ESX host for HA
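Since the version check is the easy part to get wrong across a fleet of hosts, here's a tiny sketch of comparing the vpxa agent rpm string against the expected post-upgrade prefix. The second call uses a made-up older version string just to show the failure path:

```shell
#!/bin/sh
# Sketch: verify the vpxa agent rpm matches the expected VC 2.0.2 agent.
# Feed it the output of "rpm -qa | grep vpxa" from each host.
check_vpxa() {
    expected="VMware-vpxa-2.0.2"
    case "$1" in
        ${expected}*) echo "OK: $1" ;;
        *)            echo "MISMATCH: got '$1', expected ${expected}*" ;;
    esac
}

check_vpxa "VMware-vpxa-2.0.2-50618"   # the agent named in the steps above
check_vpxa "VMware-vpxa-2.0.1-0"       # made-up stale string, for contrast
```

If a host reports a mismatch, that's your cue to rerun the service restarts and reconnect it in VC.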
> Post general experiences here. Post any issues to the appropriate group as questions.
>
> Our upgrade went OK. Same HA issues others have mentioned: HA not configured correctly; must remove HA from the cluster then re-add it, and everything is OK. Unresolved issue: 5 of my 2.x servers are showing 70-90%+ memory usage and appear to be slowly climbing. They were in the green before the upgrade. I'll open this as another topic in the ESX 2.x forum.

Had to play HA merry-go-round like the others, in addition to having 4 of my 6 hosts not take the new VC agent on their own. I had to stop and start the management services, then re-add them to their appropriate clusters. Seems to have taken OK besides that, but I was worried for a bit immediately after performing the update. -Craig
Hi Folks - I know a number of you out there use the HP MSA1000 in some capacity, so I figured I'd pass along the word that the Active/Active firmware has FINALLY been released: http://h20000.www2.hp.com/bizsupport/TechSupport/SoftwareDescription.jsp?lang=en&cc=us&prodTypeId=12169&prodSeriesId=377751&swItem=co-47061-1&prodNameId=305215&swEnvOID=1005&swLang=13&taskId=135&mode=4&idx=0 Anyone have a spare unit to test this with? -Craig