KFM's Posts

Hi HippoCoar - just wanted to say many thanks for an excellent script. Many thanks also to those who started and subsequently added to it.

Just an FYI - I had a problem with line 130:

$dslist = Get-Datastore -VMHost $ESXiHost | Sort

It wouldn't work if Get-Datastore returned only one datastore, so I modified it:

$dslist = @(Get-Datastore -VMHost $ESXiHost | Sort)

Now it works even if there is only one datastore (unlikely, but in my version I also feed the -Name parameter to Get-Datastore to filter the datastores returned - sometimes down to just one). Hope that helps somebody!
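The same defensive pattern - forcing a possibly-scalar result into a proper array before counting or indexing it - can be sketched in Python. This is only a rough analogy to PowerShell's `@(...)` array subexpression, and the `get_datastores` function below is a hypothetical stand-in for `Get-Datastore`, not a real API:

```python
def as_list(value):
    """Normalize a result that may be a single item, a sequence, or None
    into a list - analogous to wrapping a pipeline in @(...) in PowerShell."""
    if value is None:
        return []
    if isinstance(value, (list, tuple)):
        return list(value)
    return [value]

def get_datastores(name=None):
    """Hypothetical query that, like Get-Datastore, returns a bare item
    when exactly one result matches and a list otherwise."""
    stores = ["datastore1", "datastore2"]
    matches = [s for s in stores if name is None or s == name]
    return matches[0] if len(matches) == 1 else matches

# Filtering down to one result yields a scalar; as_list() makes the
# single-result and multi-result cases behave identically downstream.
dslist = as_list(get_datastores(name="datastore1"))
print(dslist)       # ['datastore1']
print(len(dslist))  # 1
```

The design point is the same as in the script fix: never trust a query helper to hand back a collection when there is only one hit.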
This is an interesting dilemma. I'm in the same situation as StevenLancer. I have modified the required files, restarted the VMware Converter Worker service, and have even restarted the server that hosts the VMware Converter Server, and I still get pathetically low throughput.

Now here's the catch. Ever since ESXi was released, I've found the throughput through the management network to be abysmally slow. Evidently so have other people, as witnessed in these threads: http://communities.vmware.com/thread/427466?start=0&tstart=0#427466 and http://communities.vmware.com/thread/168637?tstart=0. Whilst the second link refers to ESXi 3.5, I believe the problem is still evident in later releases.

So my question is: if the management network is terribly slow (~6-10 megabytes/sec), then how is VMware Converter achieving such high throughput rates when you disable NFC? AFAIK Converter does its transfers through the same vmkernel interface used for the management network.

To further muddy the waters, I have two clients with similar setups. One has the management network on a distributed switch and one has it on a standard vSwitch.

For the client with the management network on the distributed switch:
File copy using SCP - ~8 MB/s
File copy using datastore browser - ~25 MB/s

For the client with the management network on a standard vSwitch:
File copy using SCP - ~8 MB/s
File copy using datastore browser - ~8 MB/s

All four file copies use the same NICs attached to the vSwitch (distributed and standard), so I can definitely prove that the NICs assigned for management use are capable of higher throughputs.

I'm at my wits' end on how to speed up the throughput over the management network for the second client, specifically when using VMware Converter. I have a number of P2Vs to perform with large volumes in the weeks coming up. Any help/ideas would be much appreciated.
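For anyone comparing transfer paths like this, it helps to time each copy the same way and reduce it to MB/s. A minimal Python sketch of that kind of timing harness (the 32 MB scratch file is illustrative - a real comparison would copy the same source file over each path under test):

```python
import os
import shutil
import tempfile
import time

def copy_throughput_mb_s(src, dst):
    """Copy src to dst and return the observed throughput in MB/s."""
    start = time.monotonic()
    shutil.copyfile(src, dst)
    elapsed = time.monotonic() - start
    size_mb = os.path.getsize(src) / (1024 * 1024)
    return size_mb / elapsed if elapsed > 0 else float("inf")

# Illustrative run against a 32 MB scratch file; substitute a copy over
# SCP or the datastore browser to benchmark those paths consistently.
with tempfile.TemporaryDirectory() as tmp:
    src = os.path.join(tmp, "test.bin")
    dst = os.path.join(tmp, "copy.bin")
    with open(src, "wb") as f:
        f.write(os.urandom(32 * 1024 * 1024))
    print(f"{copy_throughput_mb_s(src, dst):.1f} MB/s")
```

Using the same file size and timing method for every path rules out measurement differences when the numbers diverge as sharply as 8 vs 25 MB/s.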
gssieg wrote: So I've learned this week that vSphere 5.1 VmKernels no longer support multiple gateways.

Hi, sorry to hijack this thread, but when you said the above (emphasis mine) it piqued my curiosity. Can you tell me where you read this? It's not that I disbelieve you (the opposite, really), but I need it to verify some of my findings at a client.

I have found the following peculiarity between versions 5.0 and 5.1u1. Notably, in version 5.0 I was able to ping any non-management VMkernel interface from any external network. For example:

VMkernel interface IP address 1.1.1.1 (management traffic enabled, default gateway configured as 1.1.1.254)
VMkernel interface IP address 2.2.2.2 (for iSCSI; note that a default gateway exists on an upstream switch so we can route out if required, for management purposes only)
PC IP address 3.3.3.3

Ping would succeed from 3.3.3.3 to 1.1.1.1, and ping would succeed from 3.3.3.3 to 2.2.2.2.

In version 5.1u1 I am now unable to ping from 3.3.3.3 to 2.2.2.2. Ping still works from 3.3.3.3 to 1.1.1.1.

It looks like your statement that in 5.1 multiple gateways - or, as I'd rather put it, static routes - are no longer supported confirms the behaviour I am seeing. Put another way, it looks like only the vmkernel interface that is on the same subnet as the configured default gateway can be routed to. Not that it would matter normally, as you wouldn't want to be routing IP-based (NFS/iSCSI) storage traffic anyway.

I just need to know if this is expected behaviour now in VMware ESXi 5.1 Update 1.

Cheers, KFM
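The behaviour described above boils down to a subnet check: with a single default gateway, only the vmkernel interface that shares a subnet with that gateway has a way to answer off-subnet pings. A toy Python model of that check, using the placeholder addresses from the example (the /24 prefix is my assumption, not from the original post):

```python
import ipaddress

def reachable_via_default_gateway(vmk_ip, gateway_ip, prefix=24):
    """Rough model of the observed 5.1u1 behaviour: an off-subnet ping
    can only be answered if the vmkernel interface is on the same
    subnet as the host's single default gateway."""
    vmk_net = ipaddress.ip_network(f"{vmk_ip}/{prefix}", strict=False)
    return ipaddress.ip_address(gateway_ip) in vmk_net

# Management vmk 1.1.1.1 with gateway 1.1.1.254: reply can be routed.
print(reachable_via_default_gateway("1.1.1.1", "1.1.1.254"))  # True
# iSCSI vmk 2.2.2.2: no route back out via 1.1.1.254.
print(reachable_via_default_gateway("2.2.2.2", "1.1.1.254"))  # False
```

This is only a mental model of the symptom, not a statement of how the vmkernel routing code actually works.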
As far as everyone in this thread knows, the problem mentioned exists only when using the HP NC532i in combination with the Flex-10 interconnects. Any problems when using non-Flex-10 adapters such as the two you mentioned would most likely not be related to this case or to the symptoms we're seeing when using the NC532i cards in a Flex-10 environment.
As far as I know, the problems only stem from using the NC532i Flex-10 adapter. As the NC364m is NOT a Flex-10 device, I doubt it exhibits the same problems.
Hi All,

I have a question which I cannot find an answer to after many hours and days of Googling (maybe I'm just a crap Googler). To keep things simple I'll abstract away most of the nitty-gritty details - it shouldn't make a difference. Currently I have a vSphere 4u2 cluster which utilises iSCSI HP (LeftHand) P4500 nodes. The iSCSI network is physically separate from the management and VM networks.

Basically my question is: what happens to a VM when the ESX server it's running on suddenly loses (for whatever reason - cable pull, etc.) its connection to the SAN? That is, it no longer sees any path to the storage. How can I move/restart this VM on a different ESX host that hasn't lost its connection to the SAN? Bear in mind that whilst the iSCSI network is isolated, the management and VM networks are not, which means HA won't kick in.

What I've found in this situation is that in vCenter the VM's status remains "powered on", because for all intents and purposes it is, irrespective of the actual state of the OS in the VM. In addition, the OS remains in a frozen/suspended state - most likely because the VMX process keeps running from memory. Now if I restore the iSCSI network to the SAN, the VM that was previously frozen/suspended magically comes to life as if nothing has happened! Even after 10+ minutes!

What happens if for some reason I am unable to restore the iSCSI network and instead want to start up the VMs on a different ESX server? Thing is, I can't start the VM on a different server because it's still powered on. I can't unregister the VM using vmware-cmd because it can't see the path to the vmx file on the LUN hosted by the storage! Attempting to VMotion the server off doesn't work either.

So I'm at my wits' end. Has anyone else come across this situation? Am I missing something obvious here?
Interesting, bebman - thanks for your contribution. I've just read the other similar thread, and it looks like HP and VMware have a lot of frustrated customers!

I think for now I will just stick with ESX 4u2 but with the older 1.48 driver. It didn't have SmartLink/DCC, but at least it didn't PSOD my hosts. In my environment, a PSOD was more likely to happen (due to our workload characteristics) than an uplink failing (and thus VMware failing over to the other active path).

I don't think I'll even upgrade to ESX 4.1 yet - it seems more problematic, and hopefully a new driver will come out before we require the new features of 4.1!

Cheers, KFM
After VMware reviewed my vm-support logs, they came back to me saying "we have found that this is a known issue and we have reported this to our engineering team. We do not have a fix for this issue as of now but as our engineering team is working on it, we will definitely have a fix for this in the next update."

Major bummer. We've already rolled back to the older 1.48 driver on our production cluster, but we've kept another two ESXi servers separate for testing. I've already upgraded them to 4u2 in the vain hope that they may have fixed the issue. I'll keep the thread updated with my findings.
SmartLink with Link State tracking on the vSwitch works fine.
@DSeaman: Doh sums it up quite nicely! And now I wouldn't even say it's restricted to RTSP traffic, as I had another server PSOD this morning - however, this time the App-V server wasn't on it, so I have to rule out RTSP being the cause on this one. The PSOD still points to the bnx2x driver being the root cause. I've attached a screenshot of the PSOD so maybe someone can confirm my thinking?

Btw, I have to mention that it's only when RTSP (if that is the cause) leaves the Flex-10 modules that it causes the server to PSOD. By "leave" I mean that the App-V server is on a different subnet to the App-V client, and hence traffic needs to leave the Flex-10 module, hit the core switch/router, then route to the appropriate subnet on which the App-V client resides. If I have the App-V server and client on the same portgroup/subnet/chassis then there is no problem, so it all depends on your environment. But at the end of the day, I suspect you'll want this resolved irrespective of what environment you have set up for App-V.

As for the support ticket, I'm in the process of getting my details onto my client's list of authorised parties to log a fault call on their behalf. I'll update this thread as I work through the problem.
It's a shame that it's not actually a new version or build. I'm still having problems with the driver with RTSP traffic between a Microsoft App-V server and client. As soon as my client starts streaming the application from the server, the ESXi server PSODs with errors alluding to the bnx2x driver being at fault. The older 1.48 driver didn't have this problem, but it didn't support DCC/SmartLink either, and I don't want to use beacon probing as a failover mechanism.
Hmm...tricky one, this one...

Without the advent of some application that will manipulate the View ADAM database and configuration to update the whereabouts of the replicated desktop VMs, you may have to manually re-populate the View Connection Server with the individual desktop assignments. Whilst SRM can automatically re-inventory datastores, re-map networks, etc., it can't (at this point in time, and to my humble knowledge) re-write the View ADAM database/configuration file, purely because now we're delving into the application layer in the VM, i.e. we're no longer working at the VMware resource layer - if that makes sense? Having said that, it would be cool if a future release of SRM could do this.

In light of the above, I see a possible pseudo-solution (pseudo because it's not fully automated nor tested by me - I'm just throwing up suggestions):

In addition to the Standard server in the live site, create a View Replica server in the DR site.
Add the DR VC server into View - you now have two VCs registered in View. This will allow you to see the desktop VMs that you have re-inventoried when you fail over to DR.
During DR, manually add the desktop VMs into View (you will be able to choose which VC server your desktops are registered to - choose the DR one, obviously).
Disable (or un-entitle) the desktops in the (now offline) live site and enable/entitle the DR desktops.
Do DR stuff, get the live site online, etc.
When the live site comes back online, disable (or un-entitle) the DR desktops and enable/entitle the live site desktops, and with luck the change should get replicated to the live site Connection Server.

If you can find a script that will automate the creation of pools and individual desktops, that would ease things a lot... especially if you have a lot of desktop VMs to add back into View!

KFM
@ Duncan

Regarding the following line:

/bin/sed -e 's/net\/vswitch\/child\[0001\]\/teamPolicy\/maxActive = \"1\"/net\/vswitch\/child\[0001\]\/teamPolicy\/maxActive = \"2\"/g' /tmp/esx.conf.bak >> /etc/vmware/esx.conf

Is there a reason (and does it make a difference) why you append to /etc/vmware/esx.conf as opposed to replacing it?

Cheers, KFM

Edit: Ah bugger, I shouldn't take things out of context. I totally missed the mv /etc/vmware/esx.conf /tmp/esx.conf.bak line - my bad, sorry!
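For anyone who wants to dry-run Duncan's substitution before writing anything back to esx.conf, the sed expression can be sketched as an equivalent Python `re.sub` - a hypothetical offline check, with the sample config line below being illustrative rather than a full esx.conf:

```python
import re

def bump_max_active(conf_text):
    """Rewrite the vSwitch teaming policy's maxActive from "1" to "2",
    mirroring the sed one-liner run against the esx.conf backup copy."""
    pattern = r'(net/vswitch/child\[0001\]/teamPolicy/maxActive = )"1"'
    return re.sub(pattern, r'\1"2"', conf_text)

sample = 'net/vswitch/child[0001]/teamPolicy/maxActive = "1"'
print(bump_max_active(sample))
# net/vswitch/child[0001]/teamPolicy/maxActive = "2"
```

Lines that don't match the pattern pass through untouched, which is the same behaviour as the sed `s/.../.../g` expression.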
I'm not sure if I agree entirely with Big Vern's post. My replies are in-line below his original points (quoted, since the original colour formatting doesn't carry over). I should add that we're running ESX 3.0.2 with all patches installed; this may or may not account for any differences in experiences between Big Vern and myself.

"DO NOT use seperate SAN LUNs for disks on the same VM."

If you do not use separate LUNs (whether they be DAS or SAN) for storing VMDK files that require different performance levels, then how exactly do you follow industry best practices of putting databases on RAID5 volumes and transaction logs on RAID10 volumes, for example? Surely you don't stick all the VMDK files on a single datastore which exhibits only one particular RAID level of performance?

"I am annoyed that VMware have on this website a presentation suggesting you could do this - but their software doesn't work if you do. The reason is when you create a second disk for a given VM in a different LUN it calls the disk exactly the same as the first disk. You do not get the option when creating the disk to call it something different."

I agree - the above part is correct.

"So what, you might say. Well everything works until you try to revert to a snapshot, or use vcbmounter. They will not work - precisely because the disks are called the same. You have to go in and manually change the names of the vmdk's via the service console."

The above part is not entirely correct, at least not in my testing. I've found that vcbMounter uses the SCSI bus and ID numbers of the disks themselves as identifiers rather than the VMDK name since, as you said, the name cannot be guaranteed to be unique. For example, I have scsi0-0-0-Test01.vmdk, which represents the first HDD attached at SCSI 0:0, and scsi0-1-0-Test01.vmdk, which is the second HDD attached at SCSI 0:1. Both these files are present in the destination directory that I pass to vcbMounter. Having said this, I would be interested to find out in what situation you were unable to get this working.

"More - if you have vmdks in different LUNs (even if you have manually made them unique) then when you snapshot - all changes are written to the LUN on which the first disk was created. This makes it difficult for capacity planning."

The above part is correct. I guess snapshots are stored where the vmx file is located, much like how the VM swap file is also in the same location as the vmx file. Perhaps it would be nice to be able to store VMDK snapshots with the parent VMDK file rather than where the vmx file is located?
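The naming scheme I observed can be illustrated with a small Python helper. This is my own sketch of the pattern, not vcbMounter's actual code, and the third field in the name is a guess (hard-coded to 0 here, as in both files I saw):

```python
def vcb_disk_name(vm_name, bus, target, third_field=0):
    """Build a vcbMounter-style exported disk name from the SCSI
    bus/target a disk is attached at, e.g. scsi0-0-0-Test01.vmdk
    for the disk at SCSI 0:0 on VM Test01."""
    return f"scsi{bus}-{target}-{third_field}-{vm_name}.vmdk"

# The two disks from the example above: SCSI 0:0 and SCSI 0:1.
print(vcb_disk_name("Test01", 0, 0))  # scsi0-0-0-Test01.vmdk
print(vcb_disk_name("Test01", 0, 1))  # scsi0-1-0-Test01.vmdk
```

Because the bus/target pair is unique per VM, these names stay unambiguous even when every disk's VMDK carries the same base name.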
All depends on what kind of performance you want for the VMDK files that reside on a particular datastore.

For instance, if you had a set of VMDK files that were to be used for "high performance", then you would create the VMDK files on a datastore which was created from a RAID10 LUN at the disk/SAN layer. If your VM required multiple disks of varying speed/performance - e.g. a file server - then you may opt for the system disk to be on a RAID10 datastore, whilst the disk where files are located could live on a RAID5 datastore. Hence the VMDK files for this particular machine would reside on two different datastores.

However, if there is no requirement for different speeds/performance, then there would be negligible difference between creating one big VMDK file (which resided on the one datastore) and using Windows to partition it, or creating two separate VMDK files, both on the same datastore.
Whenever something funky goes wrong I tend to restart the mgmt-vmware service. At the CLI, try typing "service mgmt-vmware restart" without the quotes.
Well, as an update: this morning when I fired up the VM again (reverted to a "pre-turning on after cloning" state) it actually sysprep'ed perfectly. However, when I rolled back to a clean snapshot and fired it up again, it failed, and failed all the subsequent times I tried after that. Any ideas, people? It would be good to get this working for the SSO feature of certain VDI brokers.
Hi All,

I was wondering if anyone has come across problems when customising Windows XP SP2 guest OSes. This is my scenario:

I have a base Windows XP SP2 build. Not on the domain, set to DHCP, and VMware Tools installed. Converted to a template.
I created a customisation specification in VC with the following settings: set the computer name to Use The Virtual Machine Name; typical network settings (i.e. DHCP); add computer to domain (and supplied credentials); everything else default.
The 1.1 and XP folders on the VC server contain the appropriate sysprep files.
I am using ESX 3.0.2 and VC 2.0.2.

When I apply the customisation specification to the new VM created from the template, I find that none of the specifications stick. The hostname is still the same as the original template's, and the computer never gets added to the AD domain.

What I've found during my troubleshooting is that when I bring up a shell during the mini-setup/sysprep stage, the VM never gets an IP address, yet it can pick up a DHCP and DNS server if you type ipconfig /all. Then I found that the DHCP and DNS Client services weren't started (I have no idea why not, even though they are set to automatic), so I started those manually, after which I got an IP address but still couldn't join the domain. The error message is "The domain could not be accessed due to networking problems".

When I look in the c:\sysprep folder on the VM, it's missing all the files required for sysprep to work except for sysprep.inf. If I sysprep the VM in the traditional manner (manually place the sysprep files in the sysprep folder and don't use VC customisation) then it all works fine. Customising Windows Server 2003 works fine too; it only seems to be Windows XP which doesn't work properly.

Screenshot below. TIA!
The only thing I noticed is that you're using UTC time when this article says to configure UTC=false: http://kb.vmware.com/selfservice/microsites/search.do?cmd=displayKC&docType=kc&externalId=1436&sliceId=1&docTypeID=DT_KB_1_1&dialogID=2644773&stateId=0%200%202642565

And I assume you're following the instructions at http://kb.vmware.com/selfservice/microsites/search.do?cmd=displayKC&docType=kc&externalId=1339&sliceId=1&docTypeID=DT_KB_1_1&dialogID=2644773&stateId=0%200%202642565 to configure NTP.
Not sure what host you're copying a VM to your ESX server from, or how you're doing the copying, but on my WinXP machine I frequently copy ISOs (or any arbitrarily large file) to my ESX servers using a free product called FastSCP from Veeam. I've timed a copy using FastSCP against one using plain ol' SCP, and FastSCP leaves it dead in the water. http://www.veeam.com/veeam_fast_scp.asp

No, I don't work for Veeam, but I find the tool very useful because it saves me from having to manually type "scp blahblah".