VMware Cloud Community
totalstu1
Contributor

Adding more drives to PowerEdge 2950 ESXi server?

Hi,

We currently have a PowerEdge 2950 with a RAID 5 configuration (4x 140GB SAS drives, one as a hot spare), giving us about 270GB of storage. We are out of space, so we purchased another four drives. I put them in the machine, but the question is what to do next to get the storage added and recognized. I've searched but can't find instructions or guidance that help. Under storage adapters I see vmhba1 under "MegaRAID SAS 1078 controller" but nothing else. I've tried "Rescan", but that appears to do nothing. Do I need to first reboot the server, log into the RAID controller, and do some configuring? If so, what would be next?

I'm pretty sure the answer is to shut down the VMs, reboot the server, and boot into the RAID controller, but any help/assistance would be greatly appreciated.

Thanks

3 Replies
weinstein5
Immortal

You cannot expand the existing RAID set or the underlying VMFS datastore. What you need to do is access the RAID controller firmware/BIOS and create another RAID set from the new drives; the resulting virtual disk will then be available for creating a new VMFS datastore.
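To give a rough idea of what that involves on a PowerEdge 2950 (an outline only; the exact key and menu labels depend on your PERC controller and firmware revision):

    1. Shut down the guests and reboot the host.
    2. Press Ctrl+R during POST to enter the RAID controller configuration utility.
    3. Create a new virtual disk (e.g. RAID 5) from the four new drives, leaving the existing array untouched.
    4. Initialize the new virtual disk, boot back into ESX, and rescan the storage adapter.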

If you find this or any other answer useful please consider awarding points by marking the answer correct or helpful

totalstu1
Contributor

As I expected. Oh well, I'll have to find some time to take down the server. One of the VMs is an important server to keep up, so this may have to wait for a weekend.

Thanks for the help

bulletprooffool
Champion

When attaching storage such as an MSA70 to an ESX 3i host, ESX will not automatically format it or create the relevant partitions. This can, however, be done from an SSH connection, using the following instructions.

1. Log on to the console, or use PuTTY to connect to the ESX host remotely. If you have not created a user for yourself, you will not be able to log in through PuTTY.

2. Switch to root. This must be done using the su - root command. If you do not use the -, you will not get root's path and will get errors saying that commands cannot be found.
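For example, from any machine with an SSH client (or PuTTY on Windows), the login and switch to root might look like this; the hostname and username are placeholders:

    $ ssh myuser@esxhost
    myuser@esxhost's password:
    $ su - root
    Password:
    #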

3. Run fdisk -l. This will give you a list of all of your current partitions. This is important because they are numbered. If you are using SCSI, you should see that all partitions start with /dev/sda#, where # is a number from 1 upwards. Remember this list of numbers, as you are going to be adding at least one more partition and will have to refer to the new one by its number.
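To give a sense of what to expect, the listing might look something like this (a made-up example, not output from the poster's server; your device names, sizes, and Id values will differ):

    # fdisk -l

    Disk /dev/sda: 293.3 GB, 293379287040 bytes
    255 heads, 63 sectors/track, 35668 cylinders

       Device Boot      Start     End     Blocks  Id  System
    /dev/sda1   *            1      13     104391  83  Linux
    /dev/sda2               14     650    5116702+ 83  Linux
    /dev/sda3              651     850    1606500  82  Linux swap

Here the highest existing number is 3, so the new partition would become /dev/sda4.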

4. Run fdisk /dev/sda. This will allow you to create a partition on the first drive. If you have more than one SCSI disk (usually the case with more than one RAID container), substitute the letter for the device you wish to partition (sdb, sdc, and so on). A sample session covering steps 4 through 10 is shown after step 10.

5. You are now in the fdisk program. If you get confused, type "m" for the menu, which lists all of your options. There are a lot of them; you will be ignoring most.

6. Type "n" to create a new partition. It will ask you for the starting cylinder; unless you have a very good reason, press Enter to accept the default. It will then ask for the ending cylinder; pressing Enter selects the rest of the space, which in most cases is what you want.

7. Once you have selected the start and end cylinders, you should get a success message. Now you must set the partition type (its ID). This is option "t" on the menu.

8. Type "t". It will ask you for the partition number. This is where that first fdisk -l listing is useful: the new partition's number will be one more than the last number in the listing. Type this number in.

9. You will now be prompted for the hex code of the partition type; you can type "L" for a list of codes. The code you want is "fb", the VMware VMFS type, so type "fb". fdisk will report that the partition type has been changed to fb (Unknown). That is what you want.

10. Now that you have configured everything, save it: choose the "w" option to write the table to disk and exit.
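Putting steps 4 through 10 together, a session might look like the following (cylinder numbers are illustrative, and the exact prompts vary with your fdisk version):

    # fdisk /dev/sda

    Command (m for help): n
    Command action
       e   extended
       p   primary partition (1-4)
    p
    Partition number (1-4): 4
    First cylinder (851-35668, default 851):
    Last cylinder or +size or +sizeM or +sizeK (851-35668, default 35668):

    Command (m for help): t
    Partition number (1-4): 4
    Hex code (type L to list codes): fb
    Changed system type of partition 4 to fb (Unknown)

    Command (m for help): w
    The partition table has been altered!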

11. Because the drive is in use by the console OS, you will probably get an error that says "WARNING: Re-reading the partition table failed with error 16: Device or resource busy." This is normal; you will need to reboot.

12. To reboot the server, type "reboot" at the prompt.

13. Once the host has rebooted, you can format the partition as VMFS. DO NOT do this from the GUI; you must once again log into the console or connect through PuTTY.

14. Once you have su'd to root, type vmkfstools -C vmfs3 /vmfs/devices/disks/vmhba0:0:0:#, where # is the number of the new partition. You should now get a "successfully created new volume" message. If you get an error, you probably chose the wrong partition; run fdisk -l and use the number of the partition whose type shows as "Unknown". Note: if you have more than one SCSI disk or more than one container, the first 0 may need to be a 1 as well.
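For illustration, if the new partition came out as number 4 on the first disk, the command would be (the -S flag, which sets a volume label, is optional; the label "newstore" is just an example name):

    # vmkfstools -C vmfs3 -S newstore /vmfs/devices/disks/vmhba0:0:0:4

If all goes well, you should see the "successfully created new volume" message mentioned above.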

15. Go to the GUI and, under Configuration > Storage, click Refresh. You should now see your new VMFS volume.
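As a side note, you can also trigger the rescan from the service console instead of the GUI. ESX 3.x ships an esxcfg-rescan utility; substitute your own adapter name (vmhba1 in the original poster's case):

    # esxcfg-rescan vmhba1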

One day I will virtualise myself . . .