VMware Cloud Community
t3kn0m0nk3y
Contributor

iSCSI Raw storage and VMDK issues

I'm not sure if this is the best place for this question, so let me know if another forum is better.

I have ESX4 set up on a server with an EqualLogic box serving 500 GB volumes over iSCSI. Right now I have a 500 GB slice that I'm using solely for OS data, storing all of the OS VMs on it.

One of those VMs is a replacement for our mail server: an Ubuntu 10.04 server that I want to run Zimbra on. In our current config we have a physical 400 GB RAID 1 set up as the /opt directory for storing the Zimbra install and datastore; this partition layout makes I/O tuning and backups a snap. I'd like to mimic this model, but I'm having some trouble.

From what I understand, I have two options on the new VM server. I can attach a raw disk via iSCSI through the vCenter menus as a virtually attached drive, or I can configure the iSCSI initiator inside Ubuntu to access the disk directly. I prefer the former because it keeps things simple, but I run into an error when I try to attach the drive.

Basically it forces me to choose the location of the virtual datastore, and I only have two options: the 147 GB SAS array on the physical ESX server or the 500 GB iSCSI drive I've attached to vCenter for OS data. In both cases the error states that there is not enough space on the datastore for the additional 500 GB added to the VM by attaching it.

So, is there a way to add that iSCSI drive to the VM without making it part of the VMDK files? Or do I have to attach via iSCSI inside of the Ubuntu VM? It would be nice to take advantage of snapshots and such, but I can't see how to make that work unless I duplicate all the storage by making the OS-data drive large enough to accommodate the additional 500 GB attachment to that VM, and likewise for all my other VMs that have similar needs. That seems impractical, as there would literally be duplicate data without the benefit of RAID or parallel I/O.

Thanks in advance for any help/guidance.

16 Replies
vmroyale
Immortal

Hello.

Note: This discussion was moved from the Virtual Machine & Guest OS community to the VMware vSphere Storage community.

From what I understand, I have two options on the new VM server. I can attach a raw disk via iSCSI through the vCenter menus as a virtually attached drive, or I can configure the iSCSI initiator inside Ubuntu to access the disk directly. I prefer the former because it keeps things simple, but I run into an error when I try to attach the drive.

Agreed - I prefer this approach usually as well.

Basically it forces me to choose the location of the virtual datastore, and I only have two options: the 147 GB SAS array on the physical ESX server or the 500 GB iSCSI drive I've attached to vCenter for OS data. In both cases the error states that there is not enough space on the datastore for the additional 500 GB added to the VM by attaching it.

What is the exact error message you are getting here? It sounds like the block size is most likely the issue; check KB 1012384 for more information about the workaround.

So, is there a way to add that iSCSI drive to the VM without making it  part of the VMDK files?

Adding it as an RDM will require a pointer file on the VMFS volume, but it won't consume any significant space on that volume unless you snapshot it. To add the iSCSI volume without the use of an RDM, you will have to use the guest initiator.
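For reference, the RDM pointer file can also be created from the ESX service console with vmkfstools. This is only a sketch: the vml device identifier and datastore paths below are placeholders (the vSphere client wizard normally does all of this for you).

```shell
# List the raw LUNs the host can see, to find the device's vml identifier.
ls -l /vmfs/devices/disks/

# Create an RDM pointer file in virtual compatibility mode (-r), which
# permits VMware snapshots; use -z instead for physical compatibility.
# The vml path and datastore paths below are placeholders.
vmkfstools -r /vmfs/devices/disks/vml.0200000000600c0ff000d5 \
    /vmfs/volumes/OSDATA/SERVER/SERVER_rdm.vmdk
```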

It would be nice to take advantage of snapshots and such, but I can't see how to make that work unless I duplicate all the storage by making the OS-data drive large enough to accommodate the additional 500 GB attachment to that VM, and likewise for all my other VMs that have similar needs. That seems impractical, as there would literally be duplicate data without the benefit of RAID or parallel I/O.

If you create RDMs in virtual compatibility mode, you can use snapshots.  It really sounds like the VMFS block size is the problem you are facing here.

Good Luck!

Brian Atkinson | vExpert | VMTN Moderator | Author of "VCP5-DCV VMware Certified Professional-Data Center Virtualization on vSphere 5.5 Study Guide: VCP-550" | @vmroyale | http://vmroyale.com
t3kn0m0nk3y
Contributor

I see how the block size may factor in, but after reading that article I'm not sure how to proceed. I remember selecting a 4 MB block size when I created my original ESX install, but at no point in slicing up the EqualLogic storage was there an option to set block size.

So is this an EqualLogic question, or a vSphere tweak? I'm not familiar enough with these technologies yet to know where to start on this.

vmroyale
Immortal

There are no options for this on the EQ side.  The block size is an aspect of the VMFS volume.

A 4 MB block size should allow a disk of up to 1024 GB minus 512 bytes, though.
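That limit comes from the VMFS-3 block size: the maximum file size is roughly 256 GB per 1 MB of block size, minus 512 bytes. A quick sketch of the limits for each supported block size:

```shell
# VMFS-3 maximum virtual disk size scales as (block size in MB) * 256 GB,
# minus 512 bytes, for the supported block sizes of 1, 2, 4 and 8 MB.
for bs in 1 2 4 8; do
  echo "${bs} MB block size -> max file size $((bs * 256)) GB (minus 512 B)"
done
```

So a 1 MB block size caps files at just under 256 GB, which would explain a 500 GB disk being rejected.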

On your VMFS volumes, do you have free space or did you create the VMDK file the same (or almost the same) size as the VMFS volume?

t3kn0m0nk3y
Contributor

I guess I'm not sure how to answer your question because I'm assuming you aren't asking the obvious.

The OS-data disk is 500 GB and currently holds 4 VMs of about 40 GB each, thin provisioned. This disk is an iSCSI virtual disk served from the EQ.

One of those 4 VMs is a 40 GB OS to which I'd like to add a 500 GB iSCSI virtual disk, also from the EQ, to hold the Zimbra datastore. I need this size to be flexible as the datastore grows; it is currently 300+ GB of data and growing.

At this time probably only about 1/3 of the individual OS VM virtual drives are full of data, and none of the 500 GB intended for the mail store is used.

I'm not sure if that answered your question, but that's what I have so far.

vmroyale
Immortal

The OS-data disk is 500 GB and currently holds 4 VMs of about 40 GB each, thin provisioned. This disk is an iSCSI virtual disk served from the EQ.

So this is a VMFS volume, built off of a 500GB volume from the EQ?

And inside this VMFS volume, you have 4 VMs each about 40GB thin provisioned?

Do any of these VMs have snapshots?

What does the VMFS volume report for free space?

And finally, when you try to add the Raw Device Mapping to this VM, what is the exact error message that you receive?

t3kn0m0nk3y
Contributor

Brian Atkinson wrote:

The OS-data disk is 500 GB and currently holds 4 VMs of about 40 GB each, thin provisioned. This disk is an iSCSI virtual disk served from the EQ.

So this is a VMFS volume, built off of a 500GB volume from the EQ?

Yes.

And inside this VMFS volume, you have 4 VMs each about 40GB thin provisioned?

Yes.

Do any of these VMs have snapshots?

Not currently.

What does the VMFS volume report for free space?

446.74 GB free of 499.75 GB

And finally, when you try to add the Raw Device Mapping to this VM, what is the exact error message that you receive?

File [OSDATA] SERVER/SERVER_1.vmdk is larger than the maximum size supported by datastore 'OSDATA'.

I get that 40 GB + 500 GB > 499 GB, but since I don't physically need to store both together, I guess I'm looking for a workaround where I can back up the 500 GB on its own and manage snaps from the EQ itself. I just want the 40 GB snaps to have the mount point for /opt referring to a physical drive like an sdb1 or similar.

vmroyale
Immortal

That is weird. If your VMFS datastore is formatted with a 4 MB block size, then this should work; I have similar setups. See if KB 1029697 is of any help.

I get that 40 GB + 500 GB > 499 GB, but since I don't physically need to store both together, I guess I'm looking for a workaround where I can back up the 500 GB on its own and manage snaps from the EQ itself. I just want the 40 GB snaps to have the mount point for /opt referring to a physical drive like an sdb1 or similar.

You can add the 500 GB volume to the guest with the iSCSI initiator and do this. You could manage the snaps from the EQ, and the VM would have the mount point for /opt this way. As with anything backup related, make sure you verify the result.
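A rough sketch of the guest-initiator route on Ubuntu 10.04 using open-iscsi; the group IP, target IQN, and device name below are all placeholders for whatever your EqualLogic actually presents:

```shell
# Install the software iSCSI initiator inside the guest.
sudo apt-get install open-iscsi

# Discover targets on the EqualLogic group (placeholder IP).
sudo iscsiadm -m discovery -t sendtargets -p 192.168.1.10

# Log in to the discovered target (placeholder IQN).
sudo iscsiadm -m node \
    -T iqn.2001-05.com.equallogic:0-8a0906-placeholder-zimbra \
    -p 192.168.1.10 --login

# Format the new disk and mount it as /opt. Verify the device name with
# dmesg or fdisk -l before running mkfs -- /dev/sdb1 is only an example.
sudo mkfs.ext3 /dev/sdb1
echo '/dev/sdb1 /opt ext3 defaults,_netdev 0 2' | sudo tee -a /etc/fstab
sudo mount /opt
```

The _netdev mount option keeps the system from trying to mount the volume before the network (and thus the iSCSI session) is up.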

t3kn0m0nk3y
Contributor

Sorry to be a pest, but nowhere in that article does it describe how to create the pointer; it just says to do so. Is there information somewhere on how to do that?

It seems to be the easiest solution; I just need some help implementing it.

vmroyale
Immortal

When you use the vSphere client to add the Raw Device Mapping, that will create the pointer. This is not something you would do manually. I referenced the KB more along the lines of seeing if the rescan operation might help.

Have you added these volumes (or any) and removed them, like the KB mentions?

t3kn0m0nk3y
Contributor

I feel like I'm chasing my tail.

I don't see any option to do this anywhere in vSphere.

I add a hard disk, choose Raw Device Mappings, select my EQ iSCSI drive, and from there I get exactly two options: store with the VM, or specify a datastore, which means either the ESX server's internal HD or the OSDATA EQ iSCSI.

Either way, I can choose Physical (which does not include the disk in snapshots) or Virtual, which gives me the check box for independent disk management.

No matter how I choose, I get the exact same result. I've tried every combination and see no option related to a pointer or anything similar.

I'm starting to go bald in one spot from all the hair I've torn out. lol.

DSTAVERT
Immortal

When you use the wizard to create your new RDM disk it will automatically create the pointer VMDK for you.

-- David -- VMware Communities Moderator
t3kn0m0nk3y
Contributor

DSTAVERT wrote:

When you use the wizard to create your new RDM disk it will automatically create the pointer VMDK for you.

I would love for that to happen. However, the reason I am posting in the first place is that it is not happening. Or if it is, it is still failing with the error mentioned in the posts above. Either way, it is not working and I need a solution.

vmroyale
Immortal

We covered this before, but are you absolutely sure that "OSDATA EQ iSCSI" is formatted with a 4MB block size?

t3kn0m0nk3y
Contributor

To be honest, I misinterpreted the way you were asking that the first go-round. I know that I installed ESX with a 4 MB block size, but I didn't configure the EQ box; it came pre-configured from a third-party consultant. I went into two of the VMs and confirmed that they were at a 1 MB block size.

So it looks like I have to migrate my VMs off that EQ drive and delete/recreate it with the proper block size. I'll try that and report back.
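If it comes to recreating the datastore, the block size can also be set from the service console when building the new VMFS volume; a sketch with a placeholder device path (the Add Storage wizard in the vSphere client offers the same block-size choice):

```shell
# Create a new VMFS-3 datastore with an 8 MB block size (-b), which
# allows files up to 2 TB minus 512 bytes. The device path is a placeholder.
vmkfstools -C vmfs3 -b 8m -S OSDATA \
    /vmfs/devices/disks/vml.0200000000600c0ff000d5:1
```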

gjacknow
Contributor

It really isn't about who configured the EQL box but about who configured the OS datastore. When you format it, you choose the block size. I'm not sure what you meant by going into the VM to see the block size, but here is what I grabbed on how to see the block size of a datastore.

To determine the block size used by a datastore:
In VI/vSphere Client:
  1. Select an ESX host that contains the datastore.
  2. Click the Configuration tab.
  3. Click Storage.
  4. Click on the datastore.
  5. The block size is identified in the Details window under the Formatting subheading.
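If you have service-console access, the same detail is available from the command line; a sketch, assuming the datastore is named OSDATA:

```shell
# Print the filesystem attributes of the datastore in human-readable
# form; the output includes a "file block size" figure.
vmkfstools -Ph /vmfs/volumes/OSDATA
```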

Greg J

t3kn0m0nk3y
Contributor

The issue has been resolved.

I had to work backwards by deleting all the drive slices from my EQ, removing them from VMware, and then, with the help of a Dell enterprise tech, updating the firmware and recreating them properly.

This time, when I added them back to VMware, I was able to set the block size appropriately to 8 MB instead of 1 MB. After that, I was able to attach the drive with a pointer, as previously mentioned, to the OS datastore.


Thanks to all who helped. It turned out to be a two-part issue between the EQ and how the slices were attached to VMware.
