jkgraham's Posts

Thanks, I follow you all the way to step 6. We have no other ESX hosts. There are 3 ESX hosts that will be removed from the old VC 2.0.2 environment and added into the VC 2.5 environment. From what I understand, this will not stop any of the virtual machines from running; they will just move management environments. There are no resource pools, custom alerts, etc. The ESX boxes are running 3.0.2 and should be manageable by VC 2.5. Once the cluster is up and running on VC 2.5, we will then update the ESX 3.0.2 hosts to 3.5 one at a time. Does this help clarify what we are doing? Thanks again!
Question on transitioning from VirtualCenter 2.0.2 to 2.5. We are not migrating the database, as it has not been heavily modified; we are okay with losing performance history and are starting with a new database. We have a new box running VirtualCenter 2.5 and a new license server, and an empty cluster has been configured. All we need to do is transition the ESX hosts currently managed by VirtualCenter 2.0.2 to the VirtualCenter 2.5 box. Since this is an "upgrade," I want to go over the sequence of events for the transition. We cannot have any virtual machines go down, so some reassurance would be great. Sequence below.
1. Disable HA and DRS on the current cluster managed by VirtualCenter 2.0.2.
2. Remove each host using the "Remove" command from the VI Client 2.0.2.
3. Add the hosts to the VirtualCenter 2.5 cluster using the VI Client 2.5.
This seems to be a simple procedure. No VM downtime should be expected. Am I missing anything here? Thanks
I will be adding another member to an ESX cluster, and we are in the process of making the decision on hardware. In its current state the cluster has 3 identical hosts: Dell PowerEdge 2950s, each with 2x Intel 5130 dual-core procs. I am looking into adding the next member with 2x Intel 53xx quad-core procs. All members will have 16GB of RAM. Based on the compatibility matrices I have seen, this should not be a problem for VMotion or HA. I am concerned about the MHz differences based on info I found in this thread: http://communities.vmware.com/message/751706#751706 According to the thread, it seems the best thing to do in this situation is to limit the vCPUs to the lowest common denominator in CPU speeds across the cluster, unless of course there are certain VMs that will absolutely need the extra MHz from the newer procs, in which case we just deal with the consequences. I am sure others have run into this issue since the post I referenced. What are you guys doing? Second, I do not believe any CPU masking will be necessary for VMotion between the 51xx and 53xx processors; am I correct on this? The last option is to just use the same processors we used in the current cluster members. I am just trying to get our best bang for the buck. Thanks for the help.
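For what it's worth, a rough way to eyeball the masking question from each host's service console; this is just a sketch (the /tmp filenames are placeholders), since VMotion compatibility hinges on CPU feature flags (SSE3/SSSE3, NX, VT), not clock speed:
# Dump each host's CPU feature flags, one per line, sorted.
grep -m1 '^flags' /proc/cpuinfo | tr ' ' '\n' | sort > /tmp/flags.$(hostname -s)
# Copy the files from each host to one place, then compare; an empty diff
# suggests no custom masking should be needed.
# diff /tmp/flags.esx1 /tmp/flags.esx4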
Currently some of the VMs have RDMs to iSCSI LUNs that are presented to the ESX hosts. No software initiators are being used in the VMs themselves; the only software initiator is at the ESX level. This should not cause a problem, correct?
Further info on this. Some of the virtual machines will have connections to the fiber channel LUNs (VMFS or RDM) and iSCSI LUNs (VMFS or RDM) at the same time. I don't see this as an issue, but I just want to throw it out there before I get started. Anyone doing this?
This is what I thought. We have been using iSCSI with NFS together for at least a year with no problems. Thanks for the reassurance.
We are migrating from an iSCSI SAN to a fiber channel SAN. The plan is to configure the fiber channel SAN, present the storage to ESX, and migrate the VMs from iSCSI to fiber channel. The VMs will be shut down before they are moved. Our SAN vendor thinks this will not be possible because ESX does not support datastores from iSCSI and fiber channel simultaneously. Is this true?
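For reference, once the FC storage is zoned and presented, both transports can be eyeballed side by side from the service console; a quick sketch for ESX 3.0.x (output details vary by setup):
# List vmhba adapters/LUNs and their service-console device mappings; the FC
# HBAs and the iSCSI software initiator (typically vmhba40) show up together.
esxcfg-vmhbadevs
# VMFS datastores from both transports appear side by side here.
ls /vmfs/volumes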
If you are looking for a cheap way to store ISO images, get an old workstation, install CentOS on it, set up NFS on it, and configure both of your ESX boxes to point to it as shared storage. This is what I do. Then I only have to rip ISO images once and they are accessible from all VMs. Use the command "dd if=/dev/cdrom of=/whateverpathyouwant/cdrom.iso" to create ISO images. I do not really even worry about the VMFS that is created locally; I only use it to temporarily store things.
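In case it helps anyone setting this up, here is a minimal sketch; the export path, subnet, server IP, and datastore label are placeholders for your own values:
# On the CentOS box: export a directory to the ESX service-console subnet,
# then start the NFS services and activate the export.
mkdir -p /export/iso
echo "/export/iso 192.168.1.0/24(rw,no_root_squash,sync)" >> /etc/exports
service portmap start
service nfs start
exportfs -a
# On each ESX 3.x host: mount the share as an NFS datastore named "isostore",
# then list the NAS mounts to verify.
esxcfg-nas -a -o 192.168.1.10 -s /export/iso isostore
esxcfg-nas -l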
Thanks all. Now to decide how many disks to eat up.
Check this thread http://communities.vmware.com/thread/74386?tstart=0
Thanks! The road you suggested is where I am leaning. I like the suggestion about dividing the LUNs across the RAID for scsi reservation concerns. I have googled plenty and reviewed the "... See more...
Thanks! The road you suggested is where I am leaning. I like the suggestion about dividing the LUNs across the RAID for SCSI reservation concerns. I have googled plenty and reviewed the "Windows Internals" book and O'Reilly's "Windows 2000 Performance" book, but have not found much that discusses the base I/O generated by the operating system itself. Now, I am pretty sure this is somewhat dependent on what the operating system is being used for, but for just the base OS I think the common I/O generated would be paging to disk and any logging that needs to be done for general operation.
I have an environment looking at 50 virtual machines on 3 ESX boxes with VMFS. The VM operating systems will be stored in ~12GB vmdk files. These vmdk files will only contain the OS and possibly the application installed (mostly SQL, IIS). No "data" will actually be stored on these vmdks; the site content, databases, and log files are stored elsewhere on the SAN. The drives on the SAN will be 300GB 15K. I am looking to put all these OS vmdks on one RAID set, either RAID5 4+1 or RAID 1/0 (6 drives). With RAID5 I get the best bang for the buck, but I feel performance may suffer when getting to 50-60 VMs. With RAID 1/0 I use one more drive and lose more space to mirroring, but get pretty good performance for small random writes. The questions are:
1. What kind of I/O characteristics should I expect out of just the OS and the application (hopefully not paging to disk)?
2. Will there be a bunch of small random writes from 50-60 VMs?
3. Out of the two RAID options above, which would you guys recommend?
Thanks
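For whatever it's worth, here is the back-of-envelope arithmetic I am using to weigh the two layouts. The ~170 IOPS per 15K drive figure and the classic write penalties (4 back-end IOs per random write for RAID5, 2 for RAID 1/0) are assumptions, not measurements from our array:
# Rough usable capacity and random-write ceiling for each option.
disks_r5=5; disks_r10=6; size_gb=300; iops_per_disk=170
# RAID5 4+1: capacity of n-1 disks, write penalty ~4.
echo "RAID5  usable: $(( (disks_r5 - 1) * size_gb ))GB, random-write IOPS: ~$(( disks_r5 * iops_per_disk / 4 ))"
# RAID 1/0: capacity of n/2 disks, write penalty ~2.
echo "RAID10 usable: $(( disks_r10 / 2 * size_gb ))GB, random-write IOPS: ~$(( disks_r10 * iops_per_disk / 2 ))"
That works out to roughly 1200GB at ~212 write IOPS for the RAID5 set versus 900GB at ~510 write IOPS for RAID 1/0, which is why I suspect RAID 1/0 holds up better if the 50-60 VMs do turn out to be small-random-write heavy.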
This looks like it may be a problem with an application running in an IIS 6.0 Application Pool. Thanks for the help.
This is good to know. The reason I asked is that on one box I see a very high count of Pages Input/sec (the number of pages read from disk). This is very bursty; trends show a low average, but spikes go into the thousands. The Committed Bytes counter (virtual memory) is always well below the Available Bytes counter (available pages in RAM), and page file usage remains below 5%. It is puzzling to me. The application is producing a lot of Page Faults/sec, and these seem to be hard page faults based on the numbers I am seeing from the performance counters, but no user is complaining about performance. ESX reports very similar results for memory: no ballooning has occurred, and the active memory is well below the memory consumed and granted. Thanks for the help.
I understand there are some Windows performance counters that should not be trusted in the virtual environment, specifically CPU and memory counters. I am curious about the Pages/sec, Pages Input/sec, and Pages Output/sec counters. My understanding is that these count the memory pages written to and read from disk. It would seem that these counters could be trusted because the VM thinks it is writing to a SCSI drive. Is this the case? Can these counters be trusted from a guest OS?
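One way I have been sanity-checking the guest numbers is to capture the host's view at the same time; a minimal sketch, assuming service-console access to the ESX 3.x host (the interval and sample count are arbitrary):
# Capture host-side stats in esxtop batch mode (5-second samples, 10 minutes)
# while the Windows counters are being logged, then compare the memory and
# disk activity the host sees against what the guest reports.
esxtop -b -d 5 -n 120 > /tmp/esxtop-capture.csv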
The icf file at http://www.vmware.com/community/thread.jspa?threadID=73745&start=0&tstart=0 worked well. The results were more of what I expected.
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
TABLE OF RESULTS VM on ESX / Celerra NS40
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
SERVER TYPE: ESX 3.0.1
CPU TYPE / NUMBER: VCPU / 1
HOST TYPE: Dell 2950, 16GB, 2x XEON 5140 2.33GHz
STORAGE TYPE / DISK NUMBER / RAID LEVEL: R1 2x 15K FC drives
SAN TYPE / HBAs: iSCSI; ESX software initiator (only uses one Gb NIC), 2x Gb NIC for iSCSI
##################################################################################
TEST NAME--                    Av. Resp. Time ms--  Av. IOs/sec--  Av. MB/sec--
##################################################################################
Max Throughput-100%Read........____44.20____..........___1356___.........___41.45___
RealLife-60%Rand-65%Read......._____99.81____..........____554___.........____4.33___
Max Throughput-50%Read.........____44.18____..........___1305___.........___41.52___
Random-8k-70%Read.............._____96.45____..........____553___.........____4.29___
Notes: Windows XP guest, Cisco 6509 VLAN, host connected via link aggregation, Celerra connected via link aggregation. No jumbo frames because of the software initiator.
A few months ago I found some information about the results from IOMeter being incorrect when testing from a VM. I was unable to find that info again, so I decided to post these results; they are definitely incorrect, and I would like to know if anyone else has seen this. This is using the software iSCSI initiator. IOMeter was run from a Windows XP guest, and the disk target was an unformatted vmdk. I ran the Access Specifications from the Quick Start section of the IOMeter user's guide.
The access specification labeled "(I/Os per second)" (transfer request size 512 bytes, 100% read, 100% sequential) gave: 31K IOs per second, 15.15 MB per second, 0.0313ms avg IO response time.
I also ran the access specification labeled "(Megabytes per second)" and got: 20K IOs per second, 1284.54 MB per second, 0.0477ms avg IO response time.
I feel these results are ridiculously high compared to other results I have seen on the forums. Any ideas on how to get results I can trust? Thanks
Read the compatibility guides here: http://www.vmware.com/support/pubs/vi_pubs.html
SATA is not supported.
What is doing your routing between VLANs? I am not sure any of the switches you mention are Layer 3 capable.