fsckit's Posts

Here is another way to do it that does not require SSH at all, just the vMA and the VI Perl Toolkit:
vmcontrol.pl --server <vCenter> --username <UID> --vmname <vm> --operation shutdown
PuTTY is just another terminal program for connecting to remote servers via SSH (or other protocols). This is not the place to learn how to use PuTTY or SSH. In brief, you would already have an open PuTTY terminal with an SSH connection to your vMA. From that shell prompt you would run your vmware-cmd command and then ssh in to all the other servers. Ideally you'd have ssh keys set up on the remote servers so that you don't have to type your password each time.
Here is how I would do it, running commands from the vMA:
1.) vmware-cmd --server <esxi_hostname> -l
2.) Loop through the above output, ssh in to each VM, and issue the shutdown command.
You would have no need to power off the VM through vSphere; it should show as powered off once the guest OS shuts down.
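A minimal sketch of that loop, assuming each VM's display name in the .vmx path doubles as a resolvable guest hostname (that convention, and the paths below, are hypothetical) and that ssh keys are already in place:

```shell
# Sketch only: derive a guest hostname from each .vmx path that
# `vmware-cmd --server <esxi_hostname> -l` prints, then shut the guest
# down over ssh. The hostname-equals-VM-name convention is an assumption.

vm_name_from_vmx() {
    # /vmfs/volumes/datastore1/web01/web01.vmx -> web01
    basename "$1" .vmx
}

shutdown_all() {
    # reads .vmx paths on stdin; echoes the ssh command instead of
    # running it, so the sketch is safe to dry-run
    while read -r vmx; do
        vm=$(vm_name_from_vmx "$vmx")
        echo ssh "root@$vm" 'shutdown -h now'
    done
}

# Real use (not run here):
#   vmware-cmd --server <esxi_hostname> -l | shutdown_all
```

Drop the echo once the output looks right.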
We are using some Linux virtual machines as NFS servers, sharing a few small file systems to 3-4 clients. I know this is not a common use for a VM, but we often need to share files, and NFS is quite convenient. (The file systems I am sharing are on SAN storage, and their performance is great. This is not a disk I/O issue.)

The problem is that the NFS performance is highly variable. I cannot determine what is causing the high degree of variance; it is nothing obvious like network or I/O load on the ESXi host or on the NFS client or server. I've used NFS often on physical servers and I know how to tune it. On VMware I'm just not seeing what is causing the slowdown. A batch job can go from taking 20 seconds to over 5 minutes. A recursive listing of the NFS file system takes anywhere between 4 seconds and 18 seconds. I do not encounter this degree of variance with physical NFS servers.

The problem seems specific to NFS rather than the network: I used netcat to test TCP performance between the NFS clients and server, and it is good, without much deviation. Any ideas on what I can look at? Thanks.
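One cheap way to quantify the variance is to time the same operation repeatedly from a client; a rough sketch (the /mnt/nfs mount path is hypothetical):

```shell
# Coarse wall-clock timer: run a command, print elapsed whole seconds.
secs() {
    start=$(date +%s)
    "$@" >/dev/null 2>&1
    end=$(date +%s)
    echo $((end - start))
}

# Sample the spread on the NFS mount (hypothetical path):
#   for i in 1 2 3 4 5; do secs ls -R /mnt/nfs; done
```

Logging those samples alongside esxtop output on the host can show whether the slow runs line up with anything on the hypervisor side.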
Actually the vSphere SDK for Perl should be able to do this. In fact, one of the included William Lam scripts, vmdkManagement.pl, does exactly this:
vmdkManagement.pl --server <vCenter> --username <me> --operation add --vmdkname <filename.vmdk> --vmname <VM> --datastore <datastore>
The problem is, it always adds the disk to controller 0. When I edit the Perl script to use controller 1 (or any other controller), it fails.
Thank you. Yes, I saw that KB and I was able to get "vim-cmd vmsvc/device.diskaddexisting" to work. However, if possible I'd rather do this with remote CLI commands. If I can avoid it, I don't want to have to activate ssh on the ESXi host and log in to run vim-cmd commands. I'm running vmkfstools from a vMA.  Know of any remote CLI command that hot-adds virtual disks?
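For anyone landing here, the local vim-cmd form that worked takes the VM id plus the vmdk path and the SCSI controller/unit numbers; a sketch that just assembles the invocation (the VM id 42 and the path in the test are hypothetical):

```shell
# Build (not run) the local vim-cmd call; args: vmid, vmdk path,
# SCSI controller number, unit number. Run the result on the ESXi
# host itself, after finding the vmid with: vim-cmd vmsvc/getallvms
diskaddexisting_cmd() {
    printf 'vim-cmd vmsvc/device.diskaddexisting %s %s %s %s\n' "$1" "$2" "$3" "$4"
}
```

This only prints the command, so it is safe to experiment with before running anything on a host.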
I can use 'vmkfstools --createvirtualdisk <size>G --diskformat thin <location>' to create a new virtual disk (*.vmdk) file, but how do I get the VM to recognize this new disk in its configuration? I need to do this through the CLI. I can do it in the vSphere Client via Edit Settings > Add > Hard Disk > "Use an existing virtual disk". Has anyone successfully done this using esxcli or other CLI commands, or with the Perl SDK? Thanks.
>There is no alarm. There is a default alarm called, "Insufficient vSphere HA failover resources".  I was wondering why this did not get triggered, based on the state of the Resource Distribution chart. I think you answered this. I was not aware of these "flings", so thank you for pointing them out to me, though I would never be allowed to install one in this particular environment.
Thank you for the response. No, it does not answer my questions, but you have provided some insight into this issue.

I do not have a HA widget. Perhaps vSphere Client 5.1 for Windows does not have this feature. I do see this (screenshot: current failover capacity 98%):

Based on this, and the lack of the alarm, we can assume that if I brought down one of the hosts in this 2-host cluster, all the VMs would be able to start on the single remaining host, correct? And the alarm I refer to in this thread's title will only get triggered when that 98% goes to 50% or below, correct?

I am still concerned about the resource distribution chart for Memory, though. It looks like I could push it up to 100% on both hosts and still not push memory failover capacity lower than 50%. I presume this is because my VMs don't have reserved memory, so vSphere only counts the minimal amount of memory required to start each VM, and it will depend on swapping and ballooning if all the VMs actually start using all their memory.

So the alarm I'm looking for is one that replicates the resource distribution chart, and alerts when the total unutilized memory in the cluster is less than the total memory of a single host. Make sense?
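The check I want that alarm to perform can be written down directly; a sketch with hypothetical figures (two 128 GB hosts, 150 GB consumed cluster-wide):

```shell
# Hypothetical numbers, for illustration only.
host_gb=128
hosts=2
used_gb=150

total_gb=$((host_gb * hosts))
free_gb=$((total_gb - used_gb))

# Alert when the cluster's unutilized memory no longer covers one
# whole host, i.e. a host failure could not be absorbed:
if [ "$free_gb" -lt "$host_gb" ]; then
    echo "ALERT: cannot tolerate a host failure (free ${free_gb}GB < ${host_gb}GB)"
fi
```

With these numbers, 106 GB free is less than one 128 GB host, so the alert fires even though admission control (which only counts reservations) would still be happy.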
Does this alarm only get triggered when an HA action fails?  I have a cluster of two ESXi 5.0 hosts, and since I am using way more than 50% of the available memory on each host, to me it looks like I cannot tolerate a single host failure. HA and DRS are enabled on the cluster, and in "Admission Control Policy" I have "Percent of cluster resources reserved as failover" set at 50%. So why no alarm?  My goal is to get an alert before we reach the state we're in now, where we apparently cannot tolerate a host failure.
Anyone else having Storage vMotions fail on EMC storage with VAAI enabled? We added some new EMC disks on a different Symmetrix frame, and unless I disable VAAI, I cannot migrate data to these disks. I get "Timed out waiting for migration data". If I disable these three VAAI parameters, it works, though the storage migrations do take longer:

VMFS3.HardwareAcceleratedLocking
DataMover.HardwareAcceleratedMove
DataMover.HardwareAcceleratedInit

Running 'esxcli storage core device vaai status get' shows everything supported on the new disks:

   VAAI Plugin Name:
   ATS Status: supported
   Clone Status: supported
   Zero Status: supported
   Delete Status: supported

The existing disks on the old EMC frame do not suffer from this problem. I can leave VAAI enabled and migrate to them just fine. Obviously I am working with EMC to try to determine what's going on, but I wondered if anyone else has seen this.
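For anyone hitting the same thing: the three parameters can be flipped with esxcli advanced settings (0 disables, 1 restores). This sketch only prints the invocations rather than running them; run them on the host, or remotely from the vMA with the usual --server/--vihost connection options prepended:

```shell
# The three VAAI primitives involved, as advanced-option paths.
vaai_opts="/VMFS3/HardwareAcceleratedLocking
/DataMover/HardwareAcceleratedMove
/DataMover/HardwareAcceleratedInit"

disable_vaai_cmds() {
    # one esxcli call per option; printed, not executed, in this sketch
    printf 'esxcli system settings advanced set --option %s --int-value 0\n' $vaai_opts
}

disable_vaai_cmds
```

Swapping --int-value 0 for 1 re-enables each primitive once the array-side issue is resolved.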
Thanks, yes I did try this, and it failed, though I can't recall why. I'll have to try it again. It is very tedious, though, with so many disks.
Thanks. I did look in Virtual Machine Settings for the Datastore Cluster, and the checkbox for "Keep VMDKs together" is not checked for any of the VMs. You are correct that I could go into advanced settings and specify a separate destination for each of my 50 disks. I would think there should be a way to specify the whole datastore cluster, though.
Thanks. No help there; I was looking at that page earlier today. It looks like the author is using a more recent Web-based vSphere client, and there might be some options that I don't have in the Windows vSphere Client 5.1.
vSphere Client 5.1 / VMware ESXi 5.0

I have some very large VMs running Linux that I need to Storage vMotion to another datastore cluster. For example, one of these monsters has about 50 virtual disks of varying sizes, totaling close to 6TB. The disks are thin-provisioned, and only about half of that is actually used. The largest virtual disks are 250GB.

I added 30 new 2TB disks to the cluster and made a new datastore cluster out of them. My problem is when I try to do the migration: I highlight a monster VM, choose Migrate, then Change datastore, set the new datastore cluster as the destination, and click Next, then Finish. I get:

"vCenter was unable to find a suitable datastore to place the following virtual machine files for the reasons listed below."

For every datastore in the new datastore cluster I get a message that says, "Insufficient disk space on <datastore_name_XXX>". This is not true; every one of those datastores is empty. It looks like vSphere is trying to put all 6TB of the VM's disks on a single datastore in the new datastore cluster. Why would vSphere do that, and how can I tell it to use multiple datastores for the destination?
"ESXi 5".  Just 5.0.  According to the document below, under "Storage DRS", it says "Datastores per datastore cluster:  32" http://www.vmware.com/pdf/vsphere5/r50/vsphere-50-configuration-maximums.pdf
We added a bunch of new disks to our cluster, and now the datastore cluster has close to 60 datastores. We intend to remove half of these disks, but will we run into any problems moving the data, since we've exceeded the (apparently theoretical) 32-datastore limit for a datastore cluster? I see no alarms for the datastore cluster, and I wonder why we were allowed to add new datastores to the cluster if 32 is in fact the limit.
Thanks. Yes, I was reading that document. I had hoped to find something specific mentioning esxcli or the vMA.  I guess it uses the same port and protocol to connect to the host as the vSphere Client does.
Port 902 does not seem to be correct.  I cannot connect to that port even on the hosts where remote esxcli commands are working. 
Running remote esxcli commands from our vMA against some of my ESXi 5 hosts fails with: "Connect to <hostname> failed: Connection failure" If I use a Perl script on the vMA to make a TCP connection on port 443 of the host, it fails.  (Most hosts pass this test.) Am I right to assume this is a network firewall issue?  I can connect to these hosts from our vCenter server just fine, and if I use vSphere Client to turn on SSH on these hosts, I can ssh from the vMA to the hosts just fine. I can run the esxcli commands on the hosts, just not remotely, from the vMA.  Anything I should check before I point fingers at the group that manages our firewalls?
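For the record, the quick reachability test can be done without a Perl script; a sketch using bash's /dev/tcp redirection (assumes bash and the coreutils timeout command are present on the vMA):

```shell
# Return 0 if a TCP connection to host:port succeeds within 5 seconds.
# /dev/tcp/<host>/<port> is a bash feature, hence the explicit bash -c.
check_port() {
    timeout 5 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null
}

# e.g.  check_port <hostname> 443 && echo open || echo blocked
```

If this fails against port 443 on the affected hosts but succeeds from the vCenter server's network, that points fairly squarely at a firewall rule between the vMA's subnet and the hosts.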