That is definitely an interesting issue. My configuration: two mirrored 36GB drives, and three 72GB drives in a RAID 5, with 4GB of RAM. So far, I have not seen any performance issues. I did notice some spikes in CPU and RAM usage, but after installing the VMware Tools it has performed nicely. Right now I have two VMs, and I will move three more over.
My question is: how are you connected to your array? I have noticed in the past that when I am connected to an array through a SCSI cable, I experience long boot times. When I switched over to a fiber connection, the boot times decreased dramatically. I found out the controller was checking the drives in the array, and they would spin up individually.
1) No, it was 4 processors on the old machine, 4 on the new VM after the move.
2) In the performance monitor the trend is very close: the guest OS CPU usage spikes follow the host OS spikes tightly, on all virtual cores.
3) Again, on the graph, the guest and host both show the spikes. Chicken or egg? It's a SQL server that's also handling a lot of filesystem read and write requests. There are no other VMs on the machine (this is the only one) and no other processes involved. Also, I note that when I ping the host adapter I get a steady, quick reply to each ping, but when I ping the guest IP the reply times are erratic and track the pauses in mouse and keyboard input, so I suspect the issue is solely in the guest.
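One way to make that ping observation more concrete is to quantify the jitter rather than eyeballing it. A minimal sketch in Python (the RTT samples below are made-up illustrative numbers, not actual measurements from this setup):

```python
import statistics

def jitter_report(label, rtts_ms):
    """Summarize a set of ping round-trip times (milliseconds)."""
    mean = statistics.mean(rtts_ms)
    stdev = statistics.stdev(rtts_ms)  # stdev is a simple stand-in for jitter
    print(f"{label}: mean={mean:.1f} ms, stdev={stdev:.1f} ms")
    return mean, stdev

# Illustrative samples: steady replies from the host, erratic ones from the guest.
host_rtts = [0.4, 0.5, 0.4, 0.4, 0.5, 0.4]
guest_rtts = [0.5, 12.0, 0.6, 48.0, 0.5, 95.0]

jitter_report("host", host_rtts)
jitter_report("guest", guest_rtts)
```

Feeding in real samples from `ping` on both addresses would show whether the guest's standard deviation is orders of magnitude above the host's, which would back up the "it's only the guest" suspicion with numbers.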
One theory I'm working with is that when I did the conversion, I used VMware Converter not only to move the VM but also to increase the size of both volumes: the system volume from 4GB to 12GB, and the data volume from 120GB to 800GB.
I'm working on the theory that the guest OS isn't able to write efficiently to the new file structure unless I do some disk maintenance at the guest OS level. I also didn't check the vSwitch config; I should do that too... good ideas!
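Before digging into guest-level disk maintenance, one quick sanity check is to confirm the guest OS actually sees the expanded volumes at their new sizes. A minimal sketch in Python (the path here is a runnable placeholder; inside this particular guest you would point it at the resized system and data drives):

```python
import shutil

def volume_report(path):
    """Return (total_gb, free_gb) for the volume containing `path`."""
    usage = shutil.disk_usage(path)
    gb = 1024 ** 3
    return usage.total / gb, usage.free / gb

# On the guest you would check the resized volumes, e.g. "C:\\" and "D:\\";
# "/" is just a placeholder so the sketch runs anywhere.
total, free = volume_report("/")
print(f"total={total:.1f} GB, free={free:.1f} GB")
```

If the reported total still matches the old size, the partition or filesystem was never actually grown to fill the enlarged virtual disk, which would point at the conversion step rather than the hypervisor.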
Hello, all -
Thanks for the suggestions. I worked with VMware tech support on this, and we made some memory resource changes, but that didn't solve the issue. What finally corrected it was changing the guest OS's CPU from a Multiprocessor HAL to a Uniprocessor ACPI HAL. Originally, the intent was to "toggle" the HAL from multi to single and back to multi; he had seen this correct similar issues in the past. As soon as we made the change to a single core and rebooted, CPU resource consumption dropped from the previous spiky 75% average down to 8-10%, and performance was back to normal. Since this corrected the issue immediately, I didn't bother returning it to multiprocessor.
His theory was that while the guest OS was originally using 4 CPUs, the ESX hypervisor and guest were wasting a lot of resources trying to load-balance processes that didn't need to be load-balanced. By switching to uniproc, you eliminate all of that overhead. By the way, this is a SQL server used for capturing data from a foreign system's "syslogs," so to speak, so there are about 15 interface processes capturing data and SQL itself importing it all. Now the system hums along with resources to spare.
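The same effect can be shown in miniature outside ESX: fanning a stream of cheap tasks out across multiple workers adds coordination overhead without adding any benefit. A toy Python sketch (purely illustrative; it says nothing about the hypervisor's actual scheduler):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def tiny_task(n):
    # A task so cheap that distributing it can't pay off.
    return n * n

items = list(range(10_000))

t0 = time.perf_counter()
serial = [tiny_task(n) for n in items]
serial_time = time.perf_counter() - t0

t0 = time.perf_counter()
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel = list(pool.map(tiny_task, items))
parallel_time = time.perf_counter() - t0

# For trivial work, the 4-worker version tends to be slower: the results are
# identical, but the coordination cost often outweighs the work itself.
print(f"serial: {serial_time:.4f}s  4 workers: {parallel_time:.4f}s")
```

The analogy is loose, but it captures the idea: when the workload doesn't need to be spread out, the machinery for spreading it out is pure overhead.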
I also have some 2650s hanging around with PERC 3/Di RAID, 3GB RAM, and dual 2.4GHz Xeon processors. When I burn the ISO (VMware-VMvisor-InstallerCD-3.5.0_Update_2-110271.i386.iso) and then boot from it on the server, I see the VMware VMvisor boot menu; it then auto-continues and hangs on a black "Loading VMware ISO" screen with a white box and some kind of progress bar at the bottom, and that is where it sits... forever. Anyone willing to share BIOS settings or anything else that might help?
Thanks in advance!
Was there any further sharing of BIOS and/or configuration settings? My Dell 2650 (dual Xeon, 3GB, PERC 3/Di) has exactly the same problem as the OP above during the ISO install process:
The progress bar moves along for about 3-4 minutes and all looks good, then a line or two of random ASCII characters appears along the bottom of the screen, along with a 'boot:' prompt. This disappears after a further minute or two and returns to the VMware ISO screen, asking you to press a key to start the install... and round and round it goes.
I've checked the CD and it all seems OK.
EDIT: Problem solved. It turns out the BIOS setting for OS install had been enabled; once disabled, it finished the install OK. After the reboot, it went through the motions before stalling again for 10 minutes or so at the Starting screen, on "Loading ipmi_si_drv"...
I gave up waiting and pressed Ctrl-Alt-Del... just a split second before I pressed the Del key, it carried on with the install process. Weird; it was as if it was waiting for a keystroke to continue. Anyway, all sorted and it's up and running.
We have some old 2650s I'd like to use in our lab running ESXi, but my question is: are you running from local disk or from USB?
Thanks. I'd like to use USB, but I can imagine that might not work on this hardware.
I'll let you know how it works out if we go this route.
I see you are able to run ESX 3i on a PowerEdge. Maybe you can assist me. I have 2 PE 2650 servers with "Broadcom Corporation NetXtreme BCM5703 Gigabit" network cards. I am able to install ESX 3.5 and ESX 3i but can't seem to get these servers to communicate with the network. My setup is explained here: http://communities.vmware.com/message/1284782#1284782 .
Is it possible for you to assist me? I am at the stage where I might get rid of these servers if they can't connect to the network.
Your server is not on the Hardware Compatibility List, though some people have been successful installing on it anyway. If you are having problems with just the networking, I would check http://www.vm-help.com/forum/ to see if someone can help with a customized installation.