BrownUK's Posts

Creates a Microsoft remote desktop file (RDG) from your vCenter server so that you can import it into Microsoft Remote Desktop Manager or any similar tool.  I could not find one anywhere, so I had to write it.
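A minimal sketch of what such a generator might look like. The hostname list here is a stand-in for what you would actually pull from vCenter (e.g. via PowerCLI's Get-VM or the pyVmomi SDK), and the RDG schema shown matches what RDCMan 2.7 accepts; other versions may differ.

```python
# Sketch: build a Remote Desktop Connection Manager (.rdg) file from a
# list of guest hostnames. The hostnames would normally come from a
# vCenter query; they are hard-coded here to keep the sketch self-contained.
import xml.etree.ElementTree as ET

def build_rdg(group_name, hostnames):
    # RDCMan 2.7-style document: one <file> group containing one
    # <server> entry per guest.
    root = ET.Element("RDCMan", programVersion="2.7", schemaVersion="3")
    file_el = ET.SubElement(root, "file")
    props = ET.SubElement(file_el, "properties")
    ET.SubElement(props, "expanded").text = "True"
    ET.SubElement(props, "name").text = group_name
    for host in hostnames:
        server = ET.SubElement(file_el, "server")
        sprops = ET.SubElement(server, "properties")
        ET.SubElement(sprops, "displayName").text = host
        ET.SubElement(sprops, "name").text = host
    return ET.tostring(root, encoding="unicode")

xml_text = build_rdg("vCenter VMs", ["web01", "sql01", "app01"])
```

Write `xml_text` out with a `.rdg` extension and RDCMan should open it as a group of three servers.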
If you really only have 4 cores, do not think about using machines with more than 2 vCPUs.  At the end of the day ESXi (the hypervisor OS) has to run on something, so if a guest is using all four cores there is nothing left. If you have hyperthreading, make sure it is on. Don't use shares unless there is contention; if there is contention, get another host. Create a machine with one vCPU; if performance is OK leave it, if not increase to two. Add more machines until you are at 75% overall CPU. Give them as much memory as you can, leaving 1GB free.
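The sizing rules above can be sketched as a quick check. The 2-vCPU cap (half the physical cores) and the 75% utilisation target are the post's rules of thumb; the function names and estimated-utilisation input are illustrative.

```python
# Sketch of the rules of thumb above: cap a VM's vCPUs at half the
# host's physical cores, and only add a VM if projected overall host
# CPU stays at or below 75%.
def can_add_vm(host_cores: int, vm_vcpus: int,
               current_util_pct: float, vm_est_util_pct: float) -> bool:
    if vm_vcpus > host_cores // 2:
        return False  # guest would compete with the hypervisor itself
    return current_util_pct + vm_est_util_pct <= 75.0

print(can_add_vm(4, 2, 50.0, 20.0))  # fits: projected 70%
print(can_add_vm(4, 4, 10.0, 5.0))   # too many vCPUs for a 4-core host
```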
So when you list the paths, how many are displayed?
Do CPU performance stats in Windows have any bearing on what is really happening? They never used to; has this changed in 5? If there is no contention, i.e. the host system is not overloaded, then all machines are getting access to the maximum amount of resources, so there is no need to optimise. If there is contention, add more resource, or limit resources based on shares so the more important machines get proportionally more. Your optimisations might end up "fighting" against VMware. Not a solution to your problem, just a few ideas.
Looks like you need another host.
Downgrade your network card to the E1000 and see if there is an improvement.
Monitor some real transactions applied against the system, every 5 minutes.  The customer's end-to-end response time is the most important thing to monitor.  The CPU may be high, but if the response time is good, are you worried?
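A minimal sketch of that kind of monitor. The `transaction` callable is a placeholder for whatever your customers actually do end to end (an HTTP request, a DB query, a login); the example run disables the sleep so it finishes instantly.

```python
# Sketch: sample a real end-to-end transaction on a schedule and record
# its response time. interval_s=300 gives the 5-minute cadence from the
# post above.
import time

def timed(transaction):
    """Run one transaction and return (result, elapsed_seconds)."""
    start = time.perf_counter()
    result = transaction()
    return result, time.perf_counter() - start

def monitor(transaction, samples, interval_s=300):
    history = []
    for _ in range(samples):
        _, elapsed = timed(transaction)
        history.append(elapsed)
        if interval_s:
            time.sleep(interval_s)
    return history

# Trivial stand-in transaction, no sleep between samples:
times = monitor(lambda: sum(range(1000)), samples=3, interval_s=0)
```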
VMware have confirmed this to be an issue.  In a disk contention situation, RDMs will have much better access to the disk.  If you have a datastore with 10 VMs on it and you have a machine with one RDM, the RDM will get 10 times as much queue depth as any one VM on the datastore.
If you want to make sure that you always have enough memory and never overcommit (conservative, I know, but some people might like it), you could use a script to set a memory reservation as a proportion of the virtual memory that is being consumed. I have done this and it seems to work quite well; it guarantees each machine has enough memory to run effectively. The Windows Task Manager Commit Charge total * 1.3 (after the machine has been running for some time) is also a good indicator of how much RAM to allocate to a VM.

Thanks
Alastair Brown | Microsoft Engineer / vSphere Architect | Produban UK Commercial
alastair.brown@produban.co.uk | +44 (0) 77985 80929 | +44 (0) 116 200 2565
Carlton Park, Narborough, Leicester, LE19 0AL, UK
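The two sizing rules in the post above, written out as arithmetic. The 0.75 proportion is illustrative (the post only says "a proportion"); the 1.3 multiplier is the post's own figure. Actually pushing the reservation to vSphere (e.g. via a VM reconfigure task in pyVmomi or PowerCLI) is left out so the calculation stands on its own.

```python
# Sketch of the two rules of thumb: reserve a proportion of consumed
# memory, and size VM RAM from Windows' Commit Charge total * 1.3.
def reservation_mb(consumed_mb: float, proportion: float = 0.75) -> int:
    """Memory reservation as a proportion of what the VM is consuming.
    The 0.75 default is an assumption, not from the post."""
    return int(consumed_mb * proportion)

def ram_from_commit_charge_mb(commit_charge_mb: float) -> int:
    """RAM to allocate, from Task Manager's Commit Charge total."""
    return int(commit_charge_mb * 1.3)

print(reservation_mb(4096))            # reserve 3072 MB of a 4 GB VM
print(ram_from_commit_charge_mb(2000)) # 2600 MB for a 2000 MB commit charge
```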
This statistic is a bit misleading: you only ACTIVELY touch 2GB of memory in some period of time, and what you are touching may change over time. Also, your server may allocate a lot more memory than it is touching, but if it needs it, this becomes available. Look at the resource allocation of a virtual machine at the Unaccessed figure; it may help.
I have read a lot of the articles, hence my queries.  For datastores you would enable SIOC, in case of the noisy neighbour problem, to guarantee that all machines have equal rights to the queues on the HBA. With RDMs, which we are forced to use because of MSCS and Linux clustering, there is no SIOC, and nothing to stop a virtual machine being given, by default, more access to the disks the more RDMs it has.

Let me make the problem as I see it a bit clearer: 1 host, 5 virtual machines, 4 with one VMDK each on one datastore, 1 with one VMDK and 10 RDMs, and just the one HBA. From what you have told me so far, the 5 virtual machines share an HBA queue of depth 32 when accessing the datastore, i.e. 32 slots to put SCSI commands in for all 5 virtual machines, correct? So each machine has roughly six slots in the queue to send out disk requests. The final machine has 10 RDM disks, and each RDM disk has its own queue of 32 on the HBA, so just for the RDM disks there are 320 slots in the HBA queue for SCSI commands. This one machine with RDMs therefore has a total of around 326 slots across the various HBA queues available for disk requests: roughly 50 times as much potential capacity for disk requests as a machine that just uses the datastore.

It seems to me that the potential for this machine to drag down the whole infrastructure could be a significant problem over which you have no control. Does this make sense, and do you think that this could become a problem?
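The queue arithmetic in the scenario above, written out. This encodes the post's model (a per-datastore queue depth of 32 shared equally by the VMs on it, plus a private 32-deep queue per RDM), not measured behaviour; real queue depths depend on the HBA driver and its settings.

```python
# Sketch of the post's queue-slot model: 5 VMs on one datastore, one of
# which also owns 10 RDMs, one HBA, queue depth 32 per LUN.
QUEUE_DEPTH = 32

def datastore_slots_per_vm(vms_on_datastore: int) -> float:
    # The shared datastore queue, split equally between its VMs.
    return QUEUE_DEPTH / vms_on_datastore

def rdm_vm_slots(num_rdms: int, vms_on_datastore: int) -> float:
    # Its share of the shared queue, plus one full queue per RDM.
    return datastore_slots_per_vm(vms_on_datastore) + num_rdms * QUEUE_DEPTH

plain = datastore_slots_per_vm(5)   # 6.4 slots per datastore-only VM
rdm = rdm_vm_slots(10, 5)           # 326.4 slots for the RDM machine
ratio = rdm / plain                 # ~51x imbalance
```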
Assuming these replies are right, then some (mainly one) of the answers I find really worrying. We have 40-ish datastores in use and about 150 RDMs. Assuming the normal HBA queue depth, which we have not changed, the virtual machines with RDMs have 4 times as many queue slots available for SCSI commands. Surely this is quite a serious imbalance, and could lead to the RDM access overwhelming the disk subsystem to the detriment of the disks held on the datastores.
Bit of a strange title. I realise that an RDM is treated like a physical device, but obviously it is routed through the HBA.  What I am trying to find is an article that discusses this in terms of the queues etc. that any SCSI command would go through. For a normal datastore the queue depth on an HBA is 32; does an RDM, for example, have its own queue in the same way as a datastore? Is there also queueing on the virtual machine in a software HBA? We have a lot of RDMs in our environment, a lot of Linux clustering and Windows MSCS. Is there any limit on the number of RDMs that are supported on a host? In a datacenter? There is not a lot of information out there that I can find. Anybody got any decent links?  I have a lot of blanks to fill concerning RDMs. Cheers, Al
I am beginning to think that it is more appropriate to go thin and dedupe.  The main reason for thick and dedupe was so that, somewhere along the way, it was possible to have some rough idea of how much space could be used. I think I need to get my head round this so that it is possible to work it out.  It is actually even more complicated than this after speaking to our storage guy: the NetApp storage is also thin provisioned as well as being deduped. So I just need to work out how much thin, deduped, thin space we are actually using, or I may just look at the committed space as the best guide. Cheers, chaps
We thick provisioned disks on our infrastructure as we thought it would enable us to more easily see if we were running out of space, and as the back end is deduped, what would be the problem?  Having ESXi thin provision the disk and then having the NetApp deduplicate the storage as well seemed to be overkill. So if I migrate a server from one datastore to another and the disk is thick provisioned and deduped on the storage, is the drive expanded, copied, and then eventually deduped? So would it actually make more sense to thin provision on deduped storage? Hope this makes sense. Cheers, Al