VMware Cloud Community
san_himmath
Contributor

Dell PowerEdge 2900 and SAS disks

For ESX 3.0.1, the HCL states that the Dell PowerEdge 2900 is supported, but it does not explicitly state that SAS drives are supported (in fact, most of the VMware docs state support only for SCSI, iSCSI and NAS). It seems you can buy a PowerEdge 2900 only with SAS disks. So I have a couple of questions:

- Will ESX 3.0.1 work with the SAS disks in the PE 2900?

- Can I use dual-ported SAS disks as "shared storage" between 2 ESX servers and use this shared storage for testing various features that need shared storage (such as VMotion, DRS, etc.)?

- If the above is possible, how does one do that?

We want to do this only to try out the features.

11 Replies
RParker
Immortal

We use Dell PE servers exclusively, so I know it works. I assume you mean there's no explicit mention of Serial Attached SCSI, but the drives are connected via the internal PERC 5e/Di RAID card, so SCSI is essentially how they connect; they are still SCSI cards.

- Can I use dual ported SAS disks as "shared storage" between 2 ESX servers and use this shared storage for testing various features that need shared storage (such as VMotion, DRS etc)?

Not sure what this is. Haven't tried this feature.

- if the above is possible how does one do that? We want to do this only to try out the features.

- Will ESX 3.0.1 work with the SAS disks in the PE 2900?

Steve_Mc
Hot Shot

The SAS drives are supported for creating VMFS volumes because the RAID controller they're connected to is on the HCL (PERC 5...).

The only way you'd get SAS as shared would be to have them in an external enclosure that could connect to more than 1 host.

Currently Dell has the MD1000 and MD3000, which both support SAS, but neither is yet supported for ESX. They may work, but they're not yet supported.

Steve

mprigge
Enthusiast

As others have stated, you should be fine with the SAS disks in that box because the RAID controller is supported.

Shared SCSI is not supported in ESX, but that doesn't necessarily mean it won't work. I think a few people here have said they've gotten HP MSA500G2 arrays to work (they're standard dual-attach SCSI arrays), but VMware obviously won't support that configuration. ESX really cares about whether the controller is supported, not so much about the SCSI devices behind it. You may still have problems if the SCSI implementation doesn't behave like FC/iSCSI in terms of reservations and the like. So there's a good chance it will work for testing, but you may want to opt for a different, supported setup if you're planning on investing a lot of money in a dual-attach SAS shelf.

san_himmath
Contributor

Okay, all of the replies assume that I am using the PERC 5/i card, which supports RAID. But the card I get for the base price of $1,199 is "SAS 5/i Integrated, No RAID", which is probably not supported. Does anybody know whether this works? Also, will a dual-ported SAS disk (a disk connected to two such hosts) work with this card?

Sanjay

RParker
Immortal

There are two types of RAID: software RAID and hardware RAID. If you have a SCSI card that supports hardware RAID, you configure the array in the controller setup (and thus use the card's onboard memory for faster RAID); this is handled by the BIOS on the SCSI/RAID card.

Software RAID is set up at the time you install an OS (ESX will support this creation): when you install, it should see ALL available space and will RAID it for you.

I know I have a few machines that have SCSI with no RAID, and they could be set up with RAID no problem, so it should work the same way here.

You can't share the drives (according to previous posts). I don't think SAS shared between two hosts will work; ESX needs exclusive access to a drive.

Steve_Mc
Hot Shot

"Okay, all of the replies assume that I am using the Perc 5/i card that supports RAID. But the card I get for the base price of $1199 is "SAS 5/i Integrated, No RAID" which is probably not supported. Does anybody know if this works or not?"

OK, it sounds like you didn't get the RAID controller. If your controller is not on the HCL here: http://www.vmware.com/pdf/vi3_io_guide.pdf (around page 5), then it's not supported, which sounds like your case.

You may want to investigate the PERC upgrade option.

"Also will dual ported SAS disk (a disk connected to 2 such hosts) work with this card? "

The SAS drives are dual-ported because the same drives can be used in an external enclosure that has two controller cards for host connections.

Drives inside a single server have no way to connect to two servers.

Steve

dlusk
Contributor

SAS = Serial Attached SCSI, i.e. the latest SCSI generation; so instead of U640, we're at SAS 1, with 2 and 3 coming.

http://www.vmware.com/vmtn/resources/698

PowerEdge 840, with the SAS 5/iR adapter. So not the PERC controller, but the $200 SAS adapter (RAID 1/10, I believe): not a true RAID controller, more like a newer-generation SATA/SAS controller.

To answer your question: yes, with the PowerEdge 2900.

We have several 2950s with SAS drives and the PERC 5/i RAID controllers, and they rock. Much better than I had initially expected. We just got a new 2950 with dual quad-core CPUs (actually $50 cheaper this route than going with dual 5160s). All SAS drives; it installs like a charm. Just waiting to put it into production to see how the VMs react.

uqbusiness
Contributor

Hi all

We're currently in the process of setting up a disaster recovery solution, which consists of daily snapshots of our VMs copied to a Dell PowerEdge 2900 running at a backup site.

The PE2900 has 2 x 73GB internal SCSI disks (RAID-1 mirrored), on which ESX 3.0.1 was installed.

I've then added a new riser card and a Dell PERC 5/i (PCIe x8) card. Attached to the new card is a Dell MD1000 chassis, which is populated with 15 x 500GB SATA disks.

During Boot, I've gone into the BIOS and configured 3 disks as hot spares, and assigned the other 12 as a single RAID-5 container (which is about 5TB).

My specific problem is trying to get ESX to recognise this new space.

If I bring up the Virtual Infrastructure Client, select the ESX server, go to the Configuration tab and then the "Hardware - Storage Adapters" section, I see 3 adapters:

* PowerEdge Expandable RAID Controller 5 - vmhba1

* PowerEdge Expandable RAID Controller 4E/4SI/DI - vmhba0

* iSCSI Software Adapter

If I click on vmhba0, I see 1 target:

Path: vmhba0:0:0 Capacity: 68GB, LUN: 0

If I click on vmhba1, I see 2 targets:

Path: vmhba1:0:0 Capacity: 0.00B LUN: 0

Path: vmhba1:256:0 Capacity: [blank] LUN: 0

I've used the "Rescan" and "Refresh" commands on the "Storage Adapters" and "Storage (SCSI, SAN and NFS)" panes, but no luck.

When I use the "Add Storage" link, it only presents vmhba0:0:0 as a location which can be added.

If I log on to the ESX service console, I can see via "dmesg" that it recognises disks sda (the internal SCSI disks) and sdb (presumably the external array).

When I use "fdisk", I see that sdb is only 2199.0GB, which doesn't seem quite right.

I've tried deleting the RAID container and remaking it. I've tried partitioning sdb, and even making file systems on it and mounting them. So, it would seem that ESX is able to at least see the storage and use it, but I can't get the VI Client to see a datastore.

So, the questions, starting with the worst case:

1. is what I am trying to do not supported, and we've wasted our money? (NOOOO!)

2. is there a limit of the disk size, and the 5TB is just too large? if so, then should I just make a smaller container?

3. am I just not following the correct procedure to do this? if so, can someone enlighten me on the steps to take so we can utilise all this storage?

cheers

/Andy

DCasota
Expert

Hi

SATA is not fully supported with ESX. There are some circumstances where it should be possible. Have a look at the following message:

http://www.vmware.com/community/thread.jspa?messageID=661537&#661537

Regards

Daniel

christianZ
Champion

"2. is there a limit of the disk size, and the 5TB is just too large? if so, then should I just make a smaller container?"

Yes, as I remember, 2 TB is the max.
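Incidentally, the 2199.0GB that fdisk reported for sdb lines up exactly with 32-bit LBA addressing, assuming the standard 512-byte sectors; nothing ESX-specific, just the arithmetic:

```python
# Rough sanity check: a ~2 TB ceiling is what you get from a 32-bit
# LBA (logical block address) field, assuming 512-byte sectors.
SECTOR_SIZE = 512        # bytes per sector (assumed)
MAX_SECTORS = 2 ** 32    # largest sector count addressable with 32 bits

limit_bytes = MAX_SECTORS * SECTOR_SIZE
print(limit_bytes)                    # 2199023255552 bytes
print(round(limit_bytes / 10**9, 1))  # 2199.0 -- the "GB" figure fdisk shows
```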

SATA won't be a problem here, since the drives are connected to a SAS controller.

The MD1000 isn't supported by ESX, but I've heard here that it worked (check this:

http://www.vmware.com/community/thread.jspa?threadID=47802&tstart=0)

uqbusiness
Contributor

Hi Christian

Thank you for your reply.

You are correct. A single 5TB container is too large.

I have instead made 3 smaller containers of 2TB, 2TB and 1TB respectively, and they all show up in ESX. So it looks like everything is okay.
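For anyone else carving up a big shelf like this, the split is just repeated capping at the 2TB limit. A throwaway sketch of the arithmetic (the ~5TB total and 2048GB cap come from this thread; the function name is made up):

```python
def split_into_containers(total_gb, cap_gb=2048):
    """Split a raw capacity into container sizes no larger than cap_gb."""
    sizes = []
    while total_gb > 0:
        chunk = min(total_gb, cap_gb)  # cap each container at the 2TB limit
        sizes.append(chunk)
        total_gb -= chunk
    return sizes

# Roughly the 2TB + 2TB + 1TB split described above
print(split_into_containers(5000))  # [2048, 2048, 904]
```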

I'm not entirely sure why the MD1000 has connectors on the back labelled "in" and "out", but anyway, I have it connected to my DELL PE2900 and it seems to be working for me.

cheers

Andy
