VMware Cloud Community
AlbertWT
Virtuoso

ESXi 3.5 into 4 upgrade problem in adding Existing VMFS partition on iSCSI SAN

Hi All,

I've got two Dell PowerEdge 2950-III servers directly attached to a Dell MD3000i iSCSI SAN, and today I had a hard time upgrading my production ESXi 3.5 U4 hosts to ESXi 4.0.

First, the upgrade failed after I followed this thread: VMware Communities: error on host upgrade: this host is not compatible with this upgrade.

It produced the following error message: ERROR: Unsupported boot disk

The boot device layout on the host does not support upgrade.

Therefore I did a clean reinstall of ESXi 4 on top of ESXi 3.5 U4 and lost every single setting on my ESXi host. I'm now stuck mapping the iSCSI SAN partition after following the guide from: http://support.dell.com/support/edocs/software/eslvmwre/AdditionalDocs/cnfigrtnguide/storage_guide.pdf

See the following attachment.

I wonder how people do their upgrade to ESXi 4.0 on a production system running ESXi 3.5 U4 with hundreds of VMs. The reason I'm using two ESXi servers connected to a single SAN is that I can just start the VMs on the SAN from the other ESXi host, which is fine and tested, but when I rebuild the failed server, it seems I must delete all the data inside the shared partition that contains my VMs.

So, in conclusion: after one of the ESXi servers fails, would the iSCSI mapping eventually destroy all the data in the partition?

CMIIW; any ideas and suggestions would be gladly appreciated.

Kind Regards,

AWT

/* Please feel free to provide any comments or input you may have. */
7 Replies
Andy_Banta
Hot Shot

From the Configuration -> Storage Adapters tab, you should be able to rescan for and find existing VMFS volumes. You shouldn't need to re-add the storage.
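If you prefer the command line, the same rescan can be done from the (unsupported) Tech Support Mode console. A sketch, assuming vmhba33 is your software iSCSI adapter (check the actual adapter name under Configuration -> Storage Adapters first):

```shell
# Rescan a specific iSCSI HBA for new devices.
# vmhba33 is a placeholder; substitute your software iSCSI adapter name.
esxcfg-rescan vmhba33

# Refresh VMFS so existing datastores on the rescanned LUNs show up.
vmkfstools -V

# List discovered VMFS volumes and their device-to-datastore mappings.
esxcfg-scsidevs -m
```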

Andy

AndreTheGiant
Immortal


Have you solved the problem using rescan?

Andre

** If you found this or any other answer useful, please consider allocating points for helpful or correct answers.

Andrew | http://about.me/amauro | http://vinfrastructure.it/ | @Andrea_Mauro
Josh26
Virtuoso

I wonder how people do their upgrade to ESXi 4.0 on a production system running ESXi 3.5 U4 with hundreds of VMs. The reason I'm using two ESXi servers connected to a single SAN is that I can just start the VMs on the SAN from the other ESXi host, which is fine and tested, but when I rebuild the failed server, it seems I must delete all the data inside the shared partition that contains my VMs.

I don't see why. You can do a fresh install of ESXi 4.0 and keep the VMFS partitions intact without issue. This doesn't require destroying data.
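As a quick sanity check after a fresh install, you can confirm from the console that the old VMFS volume is still visible and healthy. A sketch, where "datastore1" stands in for your own datastore's label:

```shell
# List all VMFS volumes the freshly installed host can see,
# along with the devices they live on.
esxcfg-scsidevs -m

# Query the filesystem attributes (label, capacity, free space) of an
# existing datastore. "datastore1" is a placeholder for your VMFS label.
vmkfstools -P /vmfs/volumes/datastore1
```

The important thing is to reach the datastore via a rescan; using "Add Storage" against a LUN that already holds VMFS is what risks reformatting it.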

AlbertWT
Virtuoso

Thanks to all who replied to my thread.

The data inside the partition is still intact after a rescan. Actually, it's when I add a new partition that the problem starts to occur (erasing the partition data).

Re-scanning the LUN solved the problem.

Cheers.

Kind Regards,

AWT

/* Please feel free to provide any comments or input you may have. */
AndreTheGiant
Immortal

Re-scanning the LUN solved the problem.

Good for you.

Andre

Andrew | http://about.me/amauro | http://vinfrastructure.it/ | @Andrea_Mauro
RogerG781
Contributor

Hello, I have a problem with my storage.

I have a RAID 1 with 1 TB of storage. On it there is a Windows installation with an 80 GB NTFS partition and an 850 GB VMFS store. I boot ESXi from a USB stick and Windows from the storage.

It was working fine. Then I wanted to build a USB stick with ESXi 4. I did this with dd, but at one point I used the wrong partition, and now the partition table is corrupted. Vista still starts from the storage, but ESXi cannot find it. A rescan doesn't change the status. When I try to add new storage, I can see the 850 GB partition with the VMFS filesystem, but ESXi cannot mount it. Can anyone help me? I have tested it with the new vSphere Client; it doesn't help. When I start a Slax live CD and check the status of the partition, I get the error "partition table has errors".

What can I do? Can I rescue my VMFS?

DSTAVERT
Immortal

Have a look at http://sanbarrow.com/sickbay.html
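For what it's worth, when only the partition table is damaged, the usual recovery approach described there is to recreate the partition entry rather than touch the data, since the VMFS filesystem itself is often untouched. A rough, hedged sketch using fdisk from a live CD; /dev/sdb is a placeholder for the corrupted RAID volume, and the original start/end sectors must be verified against your own layout before writing anything:

```shell
# DANGER: this rewrites only the partition table, but a wrong start
# sector will leave the VMFS unmountable. Double-check the target disk!
# /dev/sdb is a placeholder for the corrupted storage device.

fdisk -lu /dev/sdb    # first, inspect what remains of the table
fdisk /dev/sdb        # then, interactively:
                      #   n - recreate the partition using the original
                      #       start/end sectors from your old layout
                      #   t - set the partition type to fb (VMFS)
                      #   w - write the table and exit
```

Afterwards, a rescan on the ESXi host (vmkfstools -V from the console, or Rescan in the vSphere Client) should pick the volume back up if the sectors were correct.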

-- David -- VMware Communities Moderator