BjornJohansson's Posts

Thanks guys! I was more looking for a complete reference for the kickstart file and vimsh, similar to what exists for PowerCLI and vCLI:
http://www.vmware.com/support/developer/PowerCLI/PowerCLI41/html/index.html
http://www.vmware.com/support/developer/vcli/vcli41/doc/reference/index.html
However, the links will do! /Björn
P.S. There is an upcoming blog post on ESXi Chronicles about unattended setup: http://blogs.vmware.com
Hello all,

Is there a reference with all available installation script commands for ESXi 4.1? The setup guide contains only a limited number of commands for ks.cfg. Since I only have experience with manual installs of ESX Classic, I wonder what options I have beyond these:

vmaccepteula
rootpw --iscrypted xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
install cdrom
network --addvmportgroup=true --device=vmnic0 --bootproto=static --ip=192.168.20.10 --netmask=255.255.255.0 --gateway=192.168.20.1 --nameserver=192.168.20.3,192.168.20.2 --hostname=esxi-test.domain.local
autopart --firstdisk --overwritevmfs

If you have more info or experience in this matter, I would greatly appreciate some input! Thanks! /Björn
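Beyond the directives above, the ESXi 4.1 scripted install also accepts %pre, %post and %firstboot script sections for running commands during and after installation. A minimal sketch (the %firstboot commands for enabling remote Tech Support Mode are assumptions based on common examples, so verify them against your build and the installation guide):

```
vmaccepteula
rootpw --iscrypted xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
install cdrom
network --bootproto=dhcp --device=vmnic0
autopart --firstdisk --overwritevmfs

# Runs once after the first boot of the installed host
# (service commands below are assumptions -- check your ESXi build)
%firstboot --interpreter=busybox
vim-cmd hostsvc/enable_remote_tsm
vim-cmd hostsvc/start_remote_tsm
```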
Thank you guys, I appreciate your input!

I plan to do a real failover for one LUN containing some test machines. The primary reason for this is to practice failback. For testing recovery plans, I've been a supporter of using a test VLAN to do some testing in "the bubble". Unfortunately, in our case most of the critical systems rely on third-party communication lines that cannot easily be redirected into the test VLAN, and we don't want to create a new problem with the testing. /Björn
OK, thanks guys for your input! I'll go for storing the vmdks with the VMs until I really need an alternative solution. /Björn
FYI: I checked with an HP employee, and apparently the support matrix often lags behind. I can also confirm that the latest EVA 4400 firmware, 09522000, works with HP SRA 1.01. Not yet tested with SRM 4 and the new HP SRA (I cannot remember its version). /Björn
Yep, same feature with Windows Vista/7. Anyway, there is a difference between the VMware and Microsoft recommendations on disk space. My plan is to test with a smaller system drive and expand it when necessary. /Björn
FYI: a clean 2008 R2 Standard install with VMware Tools + Windows Update uses 8.61 GB of disk space. This makes me think that a 20-ish GB system disk should be enough for future updates, some management apps (antivirus, monitoring agent), the pagefile (depends on RAM, of course) and a kernel memory dump. If not, it's easy to extend the system drive while the VM is running, but it's a bit more tricky to shrink it. Any comments? Tnx /Björn
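To sanity-check the 20-ish GB figure, the sizing can be sketched as a simple sum. Only the 8.61 GB base install comes from the measurement above; the other component estimates are illustrative assumptions, not VMware or Microsoft figures:

```python
# Rough system-drive sizing sketch for a Windows Server 2008 R2 VM.
# Only the first entry is measured; the rest are assumed estimates.
components_gb = {
    "clean install + VMware Tools + Windows Update": 8.61,  # measured
    "pagefile (assuming 4 GB RAM, 1x rule)": 4.0,           # assumption
    "kernel memory dump": 4.0,                              # assumption
    "antivirus + monitoring agent": 1.5,                    # assumption
    "headroom for future updates": 4.0,                     # assumption
}

total_gb = sum(components_gb.values())
print(f"Estimated system drive usage: {total_gb:.2f} GB")
```

Under these assumptions the total lands just over 22 GB, which fits comfortably on a 24 GB or slightly smaller disk, with extending the drive later as the fallback.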
Hello all,

We are planning to deploy Windows Server 2008 R2 when it's supported by VMware. What size will you guys use for the system drive? How do you generally handle drives on 2008 servers: put apps on a second drive, or extend the system drive? Looking at the Guest Operating System Installation Guide for Windows Server 2008 64-bit, a minimum of 24 GB is recommended. R2 will only ship as 64-bit, so I guess that will be a good starting point. Since we store our VMs on Fibre Channel disks that are replicated to our DR site, we want to keep wasted space to a minimum.

It would be interesting to hear how you manage disks on your 2008 (R2) servers! Thanks! /Björn
Hello all!

I would like to know how you guys out there test DR plans with SRM. Of course, everyone runs their plans in test mode, but is this enough for your disaster recovery policy? We are discussing a full failover of all Protection Groups. It sounds a bit dramatic, but we want to test the whole infrastructure at the DR site. There are some business-critical systems that will be very hard to get into the test bubble network. We want to make sure that we are actually able to work at our DR site if the poo hits the fan; we have a one-hour SLA for some systems.

How do you test your recovery plans? How do you manage failback if you are doing a planned failover? Please post! Cheers! /Björn
Yep, both with a single .lic file and with multiple files (vCenter and ESX separated). Please note that the license server could successfully read the license. In my scenario a reinstallation was the fastest solution. /Björn
No problems after the reinstall. The license server could read the file perfectly; the logs and status enquiry showed no problems at all, and I could see the status and licensed products. My problem was that the files did not show up in the Infrastructure Client even though the settings were correct (this was during an SRM implementation, so I had two new identical vCenter Servers). No files were found. Thanks though!
I can confirm that a reinstall of vCenter (U5 in my case) solved the problem. It took like 10 minutes, so if you haven't made any particular settings this is the way to go, imo. /Björn
Word from HP is that FW XCS 09522000 is currently supported.
Hi all!

OK, we have two HP EVA4400s, one at each site, replicating with Continuous Access. We also have ESX 3.5 at both sites, so Site Recovery Manager was the obvious choice. I read in the HP Storage Replication Adapter (SRA) readme that only certain versions of the firmware and Command View EVA are supported. We are running a higher version on our EVA4400 than supported, and we probably have a mandatory firmware upgrade coming up.

It would be hard to believe that all SRM customers hold off on EVA firmware upgrades because of SRA support, especially on the 4400, which has had significant firmware issues (check the fixes in the latest firmware update readme). Any word on this? Is it safe to upgrade the firmware and Command View EVA? I would rather run SRM with an unsupported (but working) configuration than run my SAN on unstable firmware... Please advise. Thanks! /Björn

Devices supported:
• HP StorageWorks 8100 EVA Storage System (Firmware XCS v.6.2xx, 6.110)
• HP StorageWorks 8000 EVA Storage System (Firmware XCS v.6.2xx, 6.110)
• HP StorageWorks 6100 EVA Storage System (Firmware XCS v.6.2xx, 6.110)
• HP StorageWorks 6000 EVA Storage System (Firmware XCS v.6.2xx, 6.110)
• HP StorageWorks 4400 EVA Storage System (Firmware XCS v.6.2xx, 6.110, v.9.0xx)
• HP StorageWorks 4100 EVA Storage System (Firmware XCS v.6.2xx, 6.110)
• HP StorageWorks 4000 EVA Storage System (Firmware XCS v.6.2xx, 6.110)
Hello all,

I have a question about storing a VM's different disks on different datastores.

Environment:
65 VMs, most of them Windows Server 2003
2 x ESX 3.5 U4 clusters with DRS/HA (one cluster at HQ, one at the DR site)
2 x HP EVA 4400 with Continuous Access to our DR site
Site Recovery Manager deployment is planned
vSphere upgrade is planned as soon as SRM is supported on it
Datastores are 250 GB and contain a maximum of 5-10 VMs
All VMs' vmdk files are stored in the same location

Question: what is your view on adding a new virtual disk to a VM and storing it on another datastore? The problem today is that most of the datastores are pretty much full. If I want to add a 100 GB disk to a VM, I need to do a Storage VMotion; but if I put the disk on another datastore, I would be all set. Also, I would get the possibility to add cheap FATA disk storage for archive areas on our file server.

My fear is that administration will become more complex when VMs have their vmdk files scattered all over the SAN; also that there could possibly be problems with SRM or vSphere features (FT, for example), and that performance and virtualization/storage overhead could be affected.

Please post your pros and cons. Thank you! /Björn
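For what it's worth, creating the extra disk on another datastore is straightforward from the ESX 3.5 service console with vmkfstools; a sketch (the datastore, folder and file names below are hypothetical examples, not from the environment described):

```
# Create a 100 GB virtual disk for an existing VM on a different
# datastore (all paths here are hypothetical placeholders)
vmkfstools -c 100G /vmfs/volumes/FATA-archive01/myvm/myvm_1.vmdk

# Then attach it to the VM via the VI Client
# (Edit Settings > Add > Hard Disk > Use an existing virtual disk)
```

The same result can of course be reached entirely in the VI Client by choosing the other datastore when adding the disk; the command line just makes the placement explicit.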
Hello,

LunResignature is only necessary if you are presenting a mirror of a LUN from the SAME SAN to the ESX hosts. Now, Continuous Access may make both SANs appear as one, but they really are not. It depends on whether the LUN gets marked as a mirror or not. If it is marked as a mirror, then you need LunResignature enabled on conversion from a mirror LUN back to a regular LUN.

Hello Texiwill, and thank you for your input! I think it is considered a mirror. As far as I've understood, when a LUN is replicated it has the same signature as the source, and therefore the ESX host (or hosts?) at our recovery site needs to write a new signature in order to access the VMFS on it. I did try not to use the resignaturing setting, but without luck. The problem is the previously mentioned issue with invalid VMs, which I strongly suspect is connected to this setting. So, do I need to do it on all ESX hosts or just one? Thx!

Also, I would like to thank everyone who contributed with input! Even though we made the decision to go with SAN-level replication, your input is relevant for the discussion! Thanks! /Björn
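For reference, on ESX 3.5 the resignaturing setting can be checked and toggled from the service console with esxcfg-advcfg; a sketch (the HBA name vmhba1 is a placeholder, use whatever your host reports):

```
# Check the current value (0 = off, 1 = on)
esxcfg-advcfg -g /LVM/EnableResignature

# Enable resignaturing, then rescan the HBA so the
# "snap-..." volume becomes visible
esxcfg-advcfg -s 1 /LVM/EnableResignature
esxcfg-rescan vmhba1
```

Since resignaturing rewrites the VMFS signature on the replica LUN itself, running it from one host should be enough for the volume; the other hosts then pick up the resignatured datastore on their next rescan. Remember to set the option back to 0 afterwards.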
Thank you guys for your input!

Can someone please confirm whether LVM.EnableResignature = 1 should be activated on just one host? During the tests I had some problems afterwards with VMs becoming unavailable. Can this be related to enabling it on all hosts? Please see my old thread here: http://communities.vmware.com//thread/177263?tstart=0

vSeanClark, we currently have two clusters, one at each site. We have about 40 VMs at HQ. We have physical, SAN-attached servers that are critical; therefore we use storage-level replication. Probably more expensive, but it opens up for a more enterprise-class solution like SRM.
Hello all,

I would really appreciate some input on our disaster recovery solution from a VMware standpoint. Hopefully we get VMware SRM, but we'll have to wait for our new SAN first. Until then we have to rely on the current solution. We have tried the plan below, but I would like additional comments and suggestions. Are there any obvious technical problems with this setup?

HQ
HP C7000 + BL480c blade servers
EVA3000
ESX 3.5 U3 (I know, not supported on the EVA3000)

DR site
HP C7000 + BL480c blades
EVA4000
ESX 3.5 U3
VC 2.5 U3

Synchronization is done with HP Continuous Access over a 2 Mbit fibre line. This is our plan to activate the DR site and fall back to the HQ site:

1) Sync between the sites is stopped and SAN APA is activated at the DR site
2) Present the LUN "TestLUN" to the DR site ESX hosts
3) Activate LVM.EnableResignature 1 on all hosts and reboot
4) Add the datastore called "snap-xxxxxxxxxxTestLUN" to the ESX hosts
5) Browse to the VM's .vmx and add it to the inventory
6) Start the VM and choose "keep identifier"
7) Add a text file on the VM's desktop
8) Shut down the VM and reverse the process by activating CA sync

Well, that is basically it. Any thoughts? Thanks! /Björn
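Steps 5 and 6 of the plan above can also be done from the ESX 3.5 service console instead of the datastore browser; a sketch (the VM folder and .vmx names are placeholders, and the snap- prefix digits are whatever the resignatured volume actually gets):

```
# Register the replica VM from its .vmx on the resignatured
# datastore (names below are placeholders)
vmware-cmd -s register /vmfs/volumes/snap-xxxxxxxxxxTestLUN/testvm/testvm.vmx

# Power it on; when the UUID question comes up, answer "keep"
# to preserve the VM's identifier
vmware-cmd /vmfs/volumes/snap-xxxxxxxxxxTestLUN/testvm/testvm.vmx start
```

Scripting this per-VM makes a full-site exercise repeatable, which matters when the whole point of the test is practicing the failback half of the process.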