VMware Cloud Community
dlesser
Contributor

ESXi Migration Problems

Hello,

Currently I have a 64-bit Windows host that runs two VMs, both Windows Server 2003. I used the standalone Converter to convert the VMs to the proper format for ESXi. My problem occurs when I try to start a VM on the ESXi server: I get to the screen where Windows is loading, but it freezes and never completes. It won't boot in safe mode either. Both machines have this problem.

I know my ESXi server is not on the HCL, nor is the RAID card, but other posts report that it works.

ESXi 4.0

Dell PowerEdge 1800

2x Xeon CPUs

Dell CERC 1.5/6CH SATA RAID Card

I get no errors that I can see in either ESXi or the guest OS. If I try to shut down the guest OS, the ESXi host stops responding and sits at 95%. I then need to physically reboot the server to get it back.

Can anyone point me in a direction to get this solved? I think I will try making a new VM from scratch to see if that works.

Thanks!!!

David

5 Replies
LucasAlbers
Expert

I believe this is a RAID driver issue. The VM is attempting to load a different RAID driver than the one that was originally installed.

Boot Windows with boot logging enabled (press F8 at startup and choose Enable Boot Logging); the resulting %SystemRoot%\ntbtlog.txt should tell you which driver is failing to load.
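If the guest hangs too early to reach the F8 menu reliably, the same logging can be switched on permanently by adding the /bootlog switch to the OS entry in C:\boot.ini on the Windows Server 2003 guest. A sketch only; the ARC path and partition numbers shown are examples and will differ on your system:

```ini
[boot loader]
timeout=30
default=multi(0)disk(0)rdisk(0)partition(1)\WINDOWS
[operating systems]
multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Windows Server 2003" /fastdetect /bootlog
```

On the next boot attempt, %SystemRoot%\ntbtlog.txt lists each driver as it loads (or fails to); the last entry before the hang is usually the culprit.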

rwestCHA
Contributor

On the ESXi server, press CTRL-ALT-F1, type "unsupported" (it won't be displayed) and press Enter. You will get a password prompt; type the root password of your ESXi server. At the prompt, type "tail /var/log/messages". You will probably see an endless loop of errors like:

Jun 10 22:39:40 vmkernel: 0:00:10:58.498 cpu0:4171)<3>aacraid: Host adapter abort request (4,0,0,0) - cmd 0x41000a016540 (0x28) - FAILED
Jun 10 22:39:40 vmkernel: 0:00:10:58.498 cpu0:4171)WARNING: SCSILinuxAbortCommands Failed, Driver AAC, for vmhba2
Jun 10 22:39:40 vmkernel: 0:00:10:58.498 cpu0:4171)<3>aacraid: Host adapter abort request (4,0,0,0) - cmd 0x41000a011f40 (0x28)
Jun 10 22:39:40 vmkernel: 0:00:10:58.498 cpu0:4171)<3>aacraid: Host adapter abort request (4,0,0,0) - cmd 0x41000a011f40 (0x28) - FAILED
Jun 10 22:39:40 vmkernel: 0:00:10:58.498 cpu0:4171)WARNING: SCSILinuxAbortCommands Failed, Driver AAC, for vmhba2
Jun 10 22:39:40 vmkernel: 0:00:10:58.498 cpu0:4171)<3>aacraid: Host adapter abort request (4,0,0,0) - cmd 0x41000a007d40 (0x28)
Jun 10 22:39:40 vmkernel: 0:00:10:58.498 cpu0:4171)<3>aacraid: Host adapter abort request (4,0,0,0) - cmd 0x41000a007d40 (0x28) - FAILED
Jun 10 22:39:40 vmkernel: 0:00:10:58.498 cpu0:4171)WARNING: SCSILinuxAbortCommands Failed, Driver AAC, for vmhba2

Either the RAID card needs a firmware update or it just isn't compatible.
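If /var/log/messages is noisy, grep can pull out just the RAID-driver lines. A sketch using a sample file so it's self-contained; on the ESXi host itself, point LOG at /var/log/messages instead (this assumes ESXi's busybox grep supports -c and -E, which it does on 4.x):

```shell
# Count the RAID-driver abort/failure lines in a kernel log.
# Demo uses a sample file; on the host, set LOG=/var/log/messages.
LOG=/tmp/messages.sample
cat > "$LOG" <<'EOF'
Jun 10 22:39:40 vmkernel: 0:00:10:58.498 cpu0:4171)<3>aacraid: Host adapter abort request (4,0,0,0) - cmd 0x41000a016540 (0x28) - FAILED
Jun 10 22:39:40 vmkernel: 0:00:10:58.498 cpu0:4171)WARNING: SCSILinuxAbortCommands Failed, Driver AAC, for vmhba2
Jun 10 22:39:41 vmkernel: unrelated entry
EOF
grep -cE 'aacraid|SCSILinuxAbortCommands' "$LOG"   # prints 2 (matching lines)
```

Dropping the -c gives you the full matching lines instead of a count, which is what you want when comparing against the abort-request loop above.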

shepnasty
Contributor

I am having kind of the same issue. I have a PowerEdge 1800 server, and after it has been on for a while I get some weird error about a CPU failure and it does a core dump to disk. Do you think this could still be something with the controller card? Everything installed fine, but if I leave it for a while the thing takes a dump...

I converted my server VMs too, from a 3.5 Update 4 environment, and when I boot one of them it takes forever to start up. I wonder if it would all run better if I built one from scratch... I am going to leave that server off for a while and see if the ESX host dumps again. Please let me know if you updated the PERC firmware and whether it made things better.

Thanks

alphageek1975
Contributor

David,

I just ran into the same problem with a Dell PowerEdge 850 (again, not on the HCL). I'm not sure what prompted me to turn off hardware virtualization in the BIOS, but doing so allowed me to get past the error. I was then able to patch the system to build 175625 and turn HV back on with no further issues. Hope this helps!

Joshua Patterson

Danno9
Contributor

This is definitely a RAID-controller driver issue (see the aacraid errors above).

This is an old thread, but I found it while searching for issues with ESXi 4.0 and Dell servers using the CERC 1.5/6ch RAID controller. Figured I'd post an answer in case others are having the same issues and using the same search. I had a lot of SCSI errors in the logs pointing to an issue with the aacraid driver. This is commonly accepted as a non-working controller for ESXi, and I just left it out of the server and used the onboard SATA controller.

With the introduction of ESXi 4.1, it WORKS!!! On a whim, I re-installed the CERC 1.5/6ch card (Adaptec 2610SA) and loaded a clean install of ESXi 4.1 on the SATA drive connected to the onboard SATA controller. Rebooted, and it recognized the CERC 1.5/6ch controller and the RAID-5 array I had made with 5 drives using the BIOS utility. I was able to format a VMFS datastore and used the entire array for iSCSI storage with OpenFiler (it would always hang up at this point with ESXi 4.0). Some benchmark testing shows the CERC controller being slightly slower than the single SATA drive connected to the onboard controller. However, after turning on write caching, it was about 10-15% faster.

Not super performance by any means, but I'm glad I was able to finally use this card for creating a 2 TB volume for backing my VMs.
