
Hi, I'm trying to figure out whether ESX can support a Dell PowerVault 220S or a Dell PowerVault MD1000. I have googled and searched forums and docs for this information, and still can't find anything about it... Thanks!
Just a bit of feedback regarding CLARiiONs and ESX best practices from EMC World 2006. EMC recommends that each meta LUN presented to ESX should not exceed 250GB. Would they tell me why when I ask? Would they heck-as-like! (Brit talk for I-don't-think-so.) When the presentation is published I'll send a link. Hey Mr McCreath, did they specify whether this was the CX500 or the CX700? Did they also provide any data? (I doubt it.) I know you would ask the right questions... I'll use my contacts to find out why this is, but I think this will be the usual case of all conjecture and no data. Cheers, Steve
One of the reasons we used 1 LUN/volume per VM is that we wanted to implement snapshots of the VMs. This is done at the volume level. You can do it at the file level by creating redo logs. This can be done while the VM is running. Paulo
We already use VMotion and it works great. One of the reasons we used 1 LUN/volume per VM is that we wanted to implement snapshots of the VMs. This is done at the volume level. But snapshots have not worked out to be a good solution for disaster recovery/backup. Thanks for the info.
You are right to say that we have a small environment, about 30 VMs. I thought so... But when you use 1 (or more) large VMFS partitions with your VMs on them, do you let different ESX servers run the VMs from them at the same time? Precisely. What I mean is, when you have multiple ESX servers reading and writing to the same VMFS, can this not create problems? (...) VMFS is a concurrent filesystem, meaning it's designed to be concurrently accessed from several hosts. If your environment is that small, I suggest you create a few big LUNs and present every LUN to every ESX server. This way, you will be able to use VMotion (if, of course, you have VirtualCenter). Be sure to read the requirements in terms of switch names. And if that is not the case, could you do it also with an ext3 partition to which all servers can write at the same time? Unfortunately not, as ext3 is not a concurrent filesystem... You would end up with filesystem corruption after a few write operations. Paulo
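The corruption point comes down to locking: a concurrent filesystem serialises metadata updates with on-disk locks, while ext3 assumes a single writer and keeps no such coordination. Here is a rough host-local analogy in Python (a hypothetical demo using `flock`; this is not how VMFS is actually implemented, just the shape of the idea):

```python
# Host-local analogy only: several writers share one metadata file and
# take an exclusive lock before every read-modify-write cycle, the way a
# cluster filesystem serialises metadata updates. Skipping the flock()
# calls is the ext3-style scenario: interleaved writers eventually
# clobber each other's updates.
import fcntl
import json
import os
import tempfile

PATH = os.path.join(tempfile.gettempdir(), "vmfs-analogy-metadata.json")
if os.path.exists(PATH):
    os.remove(PATH)  # start from a clean shared "volume"

def register_vm(name):
    # "a+" creates the file if needed; truncate-then-write replaces the
    # whole record while we hold the exclusive lock.
    with open(PATH, "a+") as f:
        fcntl.flock(f, fcntl.LOCK_EX)   # blocks until we own the lock
        f.seek(0)
        raw = f.read()
        vms = json.loads(raw) if raw else []
        vms.append(name)
        f.seek(0)
        f.truncate()
        json.dump(vms, f)
        fcntl.flock(f, fcntl.LOCK_UN)
        return vms

print(register_vm("vm01"))
print(register_vm("vm02"))
```

Each caller sees a consistent list because no two writers ever hold the file at once; remove the locking and the same code running from two processes would be exactly the "ext3 shared by several hosts" failure mode.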
Hi, You are right to say that we have a small environment, about 30 VMs. But when you use 1 (or more) large VMFS partitions with your VMs on them, do you let different ESX servers run the VMs from them at the same time? What I mean is, when you have multiple ESX servers reading and writing to the same VMFS, can this not create problems? And if that is not the case, could you do it also with an ext3 partition to which all servers can write at the same time? Cheers,
Just a bit of feedback regarding CLARiiONs and ESX best practices from EMC World 2006. EMC recommends that each meta LUN presented to ESX should not exceed 250GB. Would they tell me why when I ask? Would they heck-as-like! (Brit talk for I-don't-think-so.) When the presentation is published I'll send a link.
We have one LUN for each VM. I cannot say which is better, but when some people say it is an administrative nightmare I cannot agree. You must have very few VMs. We have a bit over 200 VMs in our infrastructure, spread among 8 physical servers, on our EVA 5000. The EVA 5000 has 2 controllers, and every server has 2 HBAs. If we had one LUN per VM, we would have 800 paths to the LUNs (every LUN would be visible 4 times: once per HBA and once per EVA controller), and 200 LUNs is well beyond the 128-LUN hard limit on ESX.

We have some 256GB LUNs and some 512GB LUNs, and distribute the VMs among them. Lighter machines go to bigger LUNs (more machines per LUN). We don't create and delete LUNs all the time; we set them up some time ago, then added some, but the layout stays relatively unchanged. Avoid change whenever possible: I have never heard of "spontaneous" problems arising in stable machines, but upgrades and changes tend to be troublesome... Or, as they say: if it works, don't "fix" it.

We also give every machine its own .vmdk just for paging, and have LUNs specifically for holding the .vmdk files with pagefiles. We tend to give machines a bit more memory than strictly needed, so the pagefiles are there to give machine owners that "warm fuzzy feeling"; they're seldom used (if ever). Indeed, we're considering putting these .vmdk files on some lower-grade storage and keeping the real stuff in the EVA. Another reason why we have dedicated .vmdk files for pagefiles is backups: pagefiles tend to be placed on the C: drive, but we have a policy of keeping C: as small as possible so we can keep restore times low. Pagefiles are not backed up, so they need not be restored. Having different LUNs for different kinds of volumes (one LUN for system drives, another for pagefiles, etc.) also allows optimizations at the storage level, like using vraid1 for databases and vraid5 for system drives and pagefiles. Paulo
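The path arithmetic above can be checked back-of-the-envelope. The numbers come straight from the post; 128 is the per-host LUN limit cited elsewhere in the thread:

```python
# Path count for a one-LUN-per-VM layout on the setup described:
# 200 VMs, 2 HBAs per host, 2 EVA controllers.
vms = 200
hbas_per_host = 2
eva_controllers = 2

# Each LUN is visible once per HBA per controller = 4 paths.
paths_per_lun = hbas_per_host * eva_controllers
luns_if_one_per_vm = vms

print(luns_if_one_per_vm * paths_per_lun)   # 800 paths per host
print(luns_if_one_per_vm > 128)             # True: over the 128-LUN limit
```

Even before worrying about path counts, 200 LUNs alone already exceeds what a single ESX host can be presented with.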
Hi, We have one LUN for each VM. I cannot say which is better, but when some people say it is an administrative nightmare I cannot agree; making a new LUN and making it visible to the ESX servers takes about 10 minutes, which is no problem, I think. And if a VM is removed, you can remove the LUN and use the space on your filer for another function. Cheers,
Hahaha... Yeah, I can see how the confusion could arise; sorry about that. We had a consulting company come in on our initial deployment, and this was their recommended setup for ease of use and to help manage disk I/O and LUN usage long term. So far it's worked really well for us.
Whew! Thanks for clearing that up... I've run across others who do the same, and in some ways it makes sense (you can tune your LUN to its intended purpose)...
I may not have been clear about exactly how our LUNs are used. On any given LUN we may have multiple VM disk files for various VMs. We just keep everyone's "C:\" volume on one of our designated "C" LUNs, everyone's "D:\" volume on one of the designated "D" LUNs, and so on. Maybe that clears things up a bit. We DO NOT use a one-to-one LUN -> VM scenario, which, as you have pointed out, would be 1. a pain to manage and 2. a limiting factor in our deployment. We will have over 300 VMs once our consolidation project is complete.
Paul, While this will work, it won't scale well. ESX has a limit of 128 LUNs, so if you have four LUNs per VM, you've effectively limited yourself to something around 30 VMs. And since you need all the LUNs for all VMs to be visible to all hosts in your farm (if you want to use VMotion...) - that's 30 VMs per farm, not per host. This approach also introduces significant management overhead, which can be a big deal in some shops. If your approach is working for you, that's great! I'd just hate to try to implement it in an environment with hundreds or thousands of VMs. KLC
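The scaling argument is just integer division against the hard limit. A quick sketch (hypothetical helper; 128 is the per-host LUN limit discussed in this thread):

```python
# With a hard limit of 128 LUNs per host and several dedicated LUNs per
# VM, the farm-wide VM ceiling collapses quickly -- and because VMotion
# needs every LUN visible to every host, the ceiling is per farm, not
# per host.
def max_vms(luns_per_vm, lun_limit=128):
    """Upper bound on VMs when each VM consumes luns_per_vm LUNs."""
    return lun_limit // luns_per_vm

print(max_vms(4))   # 32 -- roughly the "around 30 VMs" figure above
print(max_vms(1))   # 128 even with only a single LUN per VM
```

Sharing big VMFS volumes among many VMs is what breaks the coupling between the VM count and the LUN count.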
While we don't have a separate LUN for every VM, we do have a separate LUN for each partition. For instance:
LUNxx-Prod-C-1 (system)
LUNxx-Prod-D-1 (data)
LUNxx-Prod-E-1 (logs)
LUNxx-Prod-S-1 (swap)
Our largest LUNs are the ones containing the data volumes, which are at 200GB. It may look a bit strange, but when we implemented ESX we had limited SAN storage available, so this is what we worked out for ease of management. We haven't had any issues thus far. Message was edited by: Paul.B
Through trial and testing, I've standardised on LUNs of 128GB each. I started with a LUN of 512GB, then quickly broke it into 2x256GB, but still saw locking contention, so 128GB is my sweet spot. Our VMDKs are between 4 and 30GB each.
Now in English. I would recommend creating an extra LUN for every virtual server for the pagefile (swap). Put your flame-proof armor on... I think that's a very bad practice. First, it would be an administrative nightmare to have to manage a separate LUN for each VM, just for a swapfile that will (hopefully) never be used! Keep in mind, also, that you can only present a MAXIMUM of 128 LUNs to an ESX host. This is not typically a problem, but with a dedicated swap LUN per VM, you could quickly exhaust your quota. I realize that on physical hardware you often create a separate swap partition. That's fine, and you can do the same thing in a virtual environment if you want. You can even create a separate .vmdk, if you're so compelled, but DON'T create a dedicated LUN for your swapfile. Take advantage of your virtual environment. Create a VMFS volume or two, drop .vmdk files on it to support 10-20 VMs per volume, and relax.
That is not a very manageable solution. You want to give your guests enough RAM to avoid paging if at all possible. We have fine performance and have never done this kind of thing.
OMG! Do not try to create .vmdk files for pagefiles... What a nightmare...
Why do you recommend a separate LUN for your pagefile? Surely you'd then be creating many smaller .vmdk files just for the pagefile. Is that not bad disk management? I can't see any benefits there. We have a base build of 8GB for a W2003 build, which includes the OS, pagefile, and application. Only when additional storage is required, e.g. a database, do we add another .vmdk to the guest.
Do not create an extra LUN for the pagefile for every virtual machine. Message was edited by: douwe