Paulo, what you are saying makes sense. However, your setup makes me think of a mainframe technology called Parallel Sysplex, which is a shared-data, tightly coupled type of cluster. Now I know for sure that even in these high-end architectures there can be problems that might eventually force you to bring all nodes offline, because of the pervasive points of contention in such a tightly coupled cluster. If these things happen there (rarely, fortunately), I guess they might be happening (certainly less rarely) on the x86 platform. I can't detail the exact situations where things might go wrong (perhaps a "bastard lock" or something like that), but whenever I hear about all ESX nodes bundled into one tight cluster layout, I tend to feel uncomfortable. Massimo.
Hi, this does scale for larger customers, obviously. I have some customers with more than 400 VMs. In those instances I have reworked the numbers while still maintaining the same methodology of laying out the VMFS volumes; you can simply scale the LUNs to be larger. At a certain point, though, you need to take tiering of the servers into account. Brad
Your detailed description made me understand a very important variable: the number of servers in the same farm. We have 8 HP DL585s, and they're all working together, seeing the same LUNs. As these servers will eventually accommodate about 500 VMs (they're currently running a bit over 200 VMs at about 40% utilization - this is under ESX 2.5), we would have an awful lot of LUNs to deal with if we broke things down as you do. On the other hand, your way of doing things is very nice for smaller environments; sometimes scalability is not an issue. In our case, we could do one of two things: either break the servers into 2 farms of 4 servers each, or use bigger LUNs. We're going for the big LUNs (about 1TB each); others may be more comfortable with more but smaller LUNs. In the end it's mostly a matter of personal choice, as I guess performance will not be much different. Paulo
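Paulo's "awful lot of LUNs" point can be put in numbers. A minimal sketch of the trade-off, assuming the 10-VMs-per-LUN rule of thumb from elsewhere in this thread and a made-up density of roughly 40 OS disks on a 1TB LUN (neither figure is from Paulo's post):

```python
# Hypothetical comparison of LUN counts for ~500 VMs under two layouts.
# The VMs-per-LUN densities are illustrative assumptions, not measurements.

def luns_needed(total_vms, vms_per_lun):
    """Round up: how many LUNs are needed to hold total_vms at vms_per_lun each."""
    return -(-total_vms // vms_per_lun)  # ceiling division

total_vms = 500

# Small-LUN methodology (~10 OS disks per LUN, as in the sizing posts above):
small_lun_count = luns_needed(total_vms, vms_per_lun=10)

# Big-LUN approach (~1TB LUNs, assumed ~40 OS disks each):
big_lun_count = luns_needed(total_vms, vms_per_lun=40)

print(small_lun_count)  # 50 LUNs to carve, zone, and track
print(big_lun_count)    # 13 LUNs to carve, zone, and track
```

Under those assumptions the small-LUN scheme means roughly four times as many LUNs to manage, which is exactly the administrative burden Paulo is weighing against the flexibility of finer-grained volumes.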
Hi guys, I felt like I needed to add my 2 cents. When I work with my customers I follow a set of guidelines that they can tweak. I mainly focus on EMC hardware but also deal with NetApp, HP and IBM. This is my best practice / thought process.

I create filesystems with the following naming conventions:

vmfsos01 - for the C: drives
vmfsdata01 - for drives or data volumes smaller than 100GB
vmfspage01 - for pagefiles or P: drives
vmfsother01 - for templates or anything else that is needed

When the admins need more space, they just increment the number, e.g. vmfsos02. This naming convention lets the admins add vmdk files in VC very easily and quickly, without searching through workbooks.

Now for the sizing. Based on VMware whitepapers, EMC docs and my own testing, I have set my number of VM C: drives per VMFS volume at 10 per LUN. To size vmfsos01 I have used a standard C: size of 20GB, so 20GB per C: x 10 VMs = 200GB. I also need to account for snap space and swap space in ESX 3.0, so I add 50GB for snap space and 2GB per VM for swap. The final calculation is: (20GB per C: x 10 VMs) + (2GB for swap x 10 VMs) + 50GB for snap = 270GB. There is obviously some leeway, but I like to make ALL of my vmfsos0x volumes the same size.

I make the vmfsdata0x volumes anywhere from 300-500GB. This is because if a VM needs more than 50-100GB for data, it will probably want a physical LUN / RDM. For vmfspage01 I make it 2GB x 10 VMs = 20GB, plus 10GB for snap = 30GB.

So in the end, the best practices I teach my customers are:

vmfsos01 - 270GB, for the C: drives
vmfsdata01 - 300GB, for drives or data volumes smaller than 100GB
vmfspage01 - 30GB, for pagefiles or P: drives
vmfsother01 - 500GB, for templates or anything else that is needed

Some other reasons for this best practice come down to replication, DR and backups. The fewer VMs on a VMFS, the easier replication will be and the more granular you can make it.
Also, you can tier your backup and replication schemes per VMFS, so having a smaller number of VMs per volume is beneficial there too. The last thing to note: if any of my customers wants to do replication at the VMFS level, I sometimes change my naming scheme to include the replication type they are using, i.e.:

vmfsosMV01 - added MV for MirrorView
vmfsdataSC01 - added SC for SanCopy
vmfspageNO01 - added NO for none

I hope this helps. If you have any questions, just PM me. Thanks, Brad
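Brad's sizing arithmetic can be sketched in a few lines. The figures (20GB C: drives, 10 VMs per LUN, 2GB swap per VM, 50GB snap reserve, 10GB snap for the pagefile volume) come straight from his post; the function names are my own invention:

```python
# A small sketch of the VMFS sizing formulas from the post above.
# All default values are taken directly from the post.

def vmfs_os_size_gb(vms_per_lun=10, c_drive_gb=20, swap_gb=2, snap_gb=50):
    """Size of a vmfsos0x volume: OS disks + ESX 3.0 per-VM swap + snap reserve."""
    return vms_per_lun * c_drive_gb + vms_per_lun * swap_gb + snap_gb

def vmfs_page_size_gb(vms_per_lun=10, pagefile_gb=2, snap_gb=10):
    """Size of a vmfspage0x volume: guest pagefiles + snap reserve."""
    return vms_per_lun * pagefile_gb + snap_gb

print(vmfs_os_size_gb())    # 270 GB, matching the post's calculation
print(vmfs_page_size_gb())  # 30 GB, matching the post's calculation
```

Keeping the formula parameterized makes it easy to rework the numbers for bigger shops, as Brad describes doing for customers with 400+ VMs, while keeping the methodology the same.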
Thanks for your reply Paulo. I guess our initial thinking was from a performance aspect. Vmdk files would have slightly more overhead than running disks natively.

The VMFS filesystem is highly optimized performance-wise; the performance hit is negligible.

One other question. Do these large vmdk files sit on one LUN? If one is supposed to make LUNs only a certain size for ESX, then it would seem you'd have to create one LUN per vmdk?

I understand your concern, but we intend to put those 2 .vmdk files on a big LUN (at least 1TB, maybe more) along with other .vmdk files. We have carved our EVA 5000 into 256GB and 512GB LUNs for VMFS, but we now intend to scrap these and create 1TB LUNs instead - carefully migrating, of course, the .vmdk files from the old LUNs to the new ones. I have yet to see a machine that definitely needs its own LUN for performance reasons, and even then, I'd have it share a VMFS volume with other, less hammered .vmdk files. Paulo
Thanks for your reply Paulo. I guess our initial thinking was from a performance aspect. Vmdk files would have slightly more overhead than running disks natively. One other question. Do these large vmdk files sit on one LUN? If one is supposed to make LUNs only a certain size for ESX, then it would seem you'd have to create one LUN per vmdk?
All our VMDK files are for guest "C:\" drives. For any server that needs a "D:\" drive, we create a LUN and attach it to the VM as a System LUN/Disk.

There's a limit on the number of LUNs that can be visible to an ESX box, so this is not a scalable strategy.

Are we limited in the way we have set this up? I'd like to know that if I wanted to, I could go from 30 VMs to 300.

No, you won't be able to scale that far with one extra LUN per VM.

Do most people run their "D:\" drives in a VMDK file?

I can't speak for others, but we only use RDMs (Raw Device Mappings) when strictly necessary - which means clusters, which, by the way, we avoid like the plague. We end up using .vmdk files for pretty much everything.

Our file server has a 240GB LUN attached to the VM as a System LUN/Disk. Should I be running a 240GB VMDK file instead?

We have one machine whose data disks are one 500GB and one 300GB .vmdk file... Paulo
Well, the point is that all VMs on the VMFS2 LUN should be powered off while upgrading to VMFS3.
This thread worries and confuses me. We are a somewhat small shop with 30 VMs. We have 3 VMFS LUNs that we use for the VMDK files. All our VMDK files are for guest "C:\" drives. For any server that needs a "D:\" drive, we create a LUN and attach it to the VM as a System LUN/Disk. I'm in the process of upgrading to VI3, VirtualCenter, VMotion... Are we limited in the way we have set this up? I'd like to know that if I wanted to, I could go from 30 VMs to 300. Do most people run their "D:\" drives in a VMDK file? Our file server has a 240GB LUN attached to the VM as a System LUN/Disk. Should I be running a 240GB VMDK file instead?
@Paul: How are you going to upgrade your environment to VI3 while your VM files are distributed across multiple LUNs? I am interested in your migration scenario.
Both ESX servers need to be able to see the VMFS volume on which the vmdk file of the VM to be VMotioned resides. Until you decide to move it, the vmdk file stays on the VMFS volume you created it on; moving the VMDK to another datastore (VMFS volume) requires powering off the VM and doing a copy (or migrate). So you can migrate a running VM between ESX servers that share the same datastore, or shut the VM down to move its VMDK between datastores. Hope this helps. Chris
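The rule Chris states boils down to a simple visibility check. A toy model, with host and datastore names invented purely for illustration:

```python
# Toy model of the VMotion eligibility rule from the post above:
# the VM's vmdk stays put, so the only requirement is that the datastore
# holding it is presented to both the source and destination ESX hosts.

def can_vmotion(vm_datastore, src_visible, dst_visible):
    """True if the VM's datastore is presented to both ESX hosts."""
    return vm_datastore in src_visible and vm_datastore in dst_visible

# Hypothetical zoning: esx1 sees both volumes, esx2 sees only vmfs_a.
esx1 = {"vmfs_a", "vmfs_b"}
esx2 = {"vmfs_a"}

print(can_vmotion("vmfs_a", esx1, esx2))  # True: both hosts see vmfs_a
print(can_vmotion("vmfs_b", esx1, esx2))  # False: esx2 cannot see vmfs_b
```

Note that this checks only the datastore holding the VM's vmdk; which other VMFS volumes each host can see is irrelevant to the move, which is exactly the point made in the follow-up reply.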
Beautiful- so the issue is not which VMFS volume a VM is on, but whether both ESX hosts in the VMotion move can see both VMFS volumes.... Thanks!
Regardless of which host VMA is running on, the VMDK file will still reside on VMFS A when you VMotion. VMFS A would have to be presented to both ESX servers that you intend to VMotion between. An ESX server can read and write to multiple VMFS volumes at the same time, and multiple VMFS volumes may be presented to multiple ESX servers at the same time.
Question: if I have (for example) two 250GB LUNs that each hold a VMFS volume, and VMA is running on VMFS A, can I use VMotion to move it to a host accessing VMFS B?
The 5080 CPU is Woodcrest, correct? If so, the Woodcrest CPU is not supported with 3.0.

Woodcrest is the 51xx series; that is why I selected the 50xx, to play it safe.

Also, what are your plans for this server? You have over 3TB of disk attached to this host, so I am assuming you want to run a lot of VMs on it. Seems to be a lot of eggs in one Dell basket.

Yes, that is a lot of space right now, depending on how we choose to set up the RAID... but looking at the company's 1-3 year plan, we can see it all being used. This is also the first step on the way to having 3 ESX servers with one SAN running the whole datacenter. I know it looks big, but so do most new servers.
The 5080 CPU is Woodcrest, correct? If so, the Woodcrest CPU is not supported with 3.0. Also, what are your plans for this server? You have over 3TB of disk attached to this host, so I am assuming you want to run a lot of VMs on it. Seems to be a lot of eggs in one Dell basket.
I have just ordered a Dell PE2950 + MD1000, and Dell tells me that it is supported. My setup is:

Dell PE2950
- 2 x Xeon 5080
- 8GB RAM
- PERC 5/i
- 5 x 146GB 15K rpm SAS disks
- 2 x dual-port Intel 1Gbit NICs (PCIe)

Dell MD1000
- 1 EMM module
- PERC 5/E (PCIe)
- 9 x 300GB 10K rpm SAS disks

So I hope it works.
Both devices should work. I have tested with the PV220S, and I know the PERC 5 controller (required for the MD1000) is supported.
Yes, but don't expect to use it for VMotion. You will need SAN, NFS or iSCSI with ESX 3.0 for that.
If it is a simple direct attach SCSI enclosure, then yes, it is supported (as long as it's connected to a supported controller)