VMware Cloud Community
MLaskowski012
Contributor

Please advise on SAN / VMFS extents questions

Hey guys,

Thank you in advance for your comments / advice and recommendations.

I'm not a SAN expert and don't really know much about how our SAN is configured. What makes it even harder is that our datacenter and SAN services are outsourced; we're on a shared SAN used not only for our different environments but also for the provider's other clients. When it comes to my options, I can ask for RAID5 272GB LUNs or RAID10 136GB LUNs; that's all I know. We have an enterprise-class EMC Symmetrix DMX-4, and the way the array is configured, the biggest LUN I can get is 272GB. I find this weird because on any mid-range SAN I could create a 1-2TB LUN if I wanted to, but I guess the enterprise-class DMX is different.

I have designed and built a huge VMware environment, and it's still growing. In VMware the only way to make datastores bigger is to use extents. I've always been told to stay away from extents, and I've been staying away as much as I can. I currently have about 35TB presented across roughly 130 datastores of 272GB LUNs, plus a few extent-based datastores (about 5 total) that are between 500-800GB. We're running low on space, and I'm getting another 10TB presented. I know VMware supports up to 256 datastores, so I'm still fine, but managing this much storage is a nightmare at this point.

We should be on ESX 4.0 by mid-September, and I know 4.0 brings options like thin provisioning, improved Storage VMotion, etc. Storage VMotion will help me clean some of this up and get more organized. With the new 10TB coming, I'm starting to rethink extents: carve the 10TB into 500GB-1TB VMFS volumes, Storage VMotion everything onto them, delete the current 272GB LUNs, create more 500GB-1TB extent-based volumes, and so on. Not only does this go against everything I've always been told, it also swings from no extents at all to 45TB+ of extents built from 500GB-1TB volumes. I was hoping you could give me some advice. ?:|
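A quick sketch of the capacity math behind the numbers above (the datastore counts are approximate; the 1TB = 1024GB convention and the helper name are my assumptions):

```python
# Hypothetical helper: how many datastores the presented capacity implies
# at different VMFS volume sizes. Figures from the post: 35TB today,
# 45TB after the new 10TB arrives, 272GB per LUN, 256-datastore limit.
import math

TB = 1024  # GB per TB (binary convention; an assumption)

def datastores(total_tb: float, volume_gb: float) -> int:
    return math.ceil(total_tb * TB / volume_gb)

print(datastores(35, 272))    # ~132 datastores today
print(datastores(45, 272))    # ~170 -- creeping toward the 256 cap
print(datastores(45, 1024))   # 45 if rebuilt as 1TB multi-extent volumes
```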

Would you do it?

What do you think about using extents in VMware?

What are the main disadvantages with doing this?

Have you seen or heard of anybody using extents to this level?

Are you aware of any gotchas?

When building a datastore with extents, does it matter which LUNs I choose to build my volume?

So what's the deal with these enterprise-class / DMX SANs? They cost so much more, yet I can't get a big LUN?

THX,

Mike Laskowski

31 Replies
azn2kew
Champion

Sounds like you're on the right track. Requesting bigger LUNs in the 400-600GB range would be pretty standard; it really depends on the types of servers you load onto them, but 10-16 VMs per LUN would be good. There is a 256-LUN limit and a 2TB size limit, so if you need large RDMs or LUNs for file servers or Oracle/SQL databases, 272GB isn't always enough for those purposes. I wouldn't recommend using extents: they don't give you any performance boost, and they create complications when you need to remove an extent, since you'd have to destroy the whole VMFS and might experience data corruption. I don't care if it's a DMX or a CLARiiON, you should be able to request larger LUNs with whatever RAID level you're expecting, especially since this is outsourced storage. My recommendation: since this is a huge VMware infrastructure, you should have your own SAN team to be more reliable and flexible for administration and management.

Storage VMotion will definitely give you the opportunity to clean this up, since you can migrate VMDKs across LUNs without downtime. I do not use extents due to the caveats above, though they're sometimes fine with smaller LUNs under 200GB. See the VMFS best practices guide http://communities.vmware.com/docs/DOC-9276 for more details.

If you found this information useful, please consider awarding points for "Correct" or "Helpful". Thanks!!!

Regards,

Stefan Nguyen

VMware vExpert 2009

iGeek Systems Inc.

VMware, Citrix, Microsoft Consultant

MLaskowski012
Contributor

I have asked for bigger LUNs tons of times, and they say it's not possible with the way the array is set up. If there are any SAN / DMX experts here: what would make that not possible? So if I want bigger LUNs, extents would be my only option right now? But I'm really not sure extents are a good option.

azn2kew
Champion

Unfortunately, the maximum LUN is 240GB for the DMX4 SAN, so the best solution is to request a migration to a different type of SAN they have available that supports larger LUN sizes, or continue using extents and cross your fingers!

Regards,

Stefan Nguyen
malaysiavm
Expert

I went through the same experience previously on a deployment that included EMC DMX storage as part of the scope. According to the SAN engineer, that is the best practice and sizing recommended by the EMC guys based on the RAID group and spindle count. You may want them to explain it to you in detail if you would like to know more. I am no expert in the DMX series, but on the CLARiiON we do provision bigger LUNs of 300GB to 500GB, and we have not run into performance issues so far.

Craig

vExpert 2009

Malaysia VMware Communities - http://www.malaysiavm.com

azn2kew
Champion

We're mostly using EMC CLARiiON CX500, CX300, and CX380 arrays, which have no problem presenting larger LUNs for our purposes. You might have to stick with the limitation, but raise the problems with using extents so they can migrate your virtual infrastructure to different storage arrays/systems to accommodate you, since you pay for the service. CUSTOMER IS KING.

Regards,

Stefan Nguyen
depping
Leadership

First, I would suggest reading Chad's post on this topic; he's an EMC employee and a storage/VMware guru:

http://virtualgeek.typepad.com/virtual_geek/2009/03/vmfs-best-practices-and-counter-fud.html

I would use extents if you can't increase the LUN size. Anyway, I hardly ever see someone hitting the 256 limit, because normally one would present LUNs only to the hosts in a single cluster, not to every cluster. How did you set this up? How many hosts / clusters are we talking about, and can they all access these LUNs?

Duncan

VMware Communities User Moderator | VCP | VCDX



Blogging: http://www.yellow-bricks.com


If you find this information useful, please award points for "correct" or "helpful".

MLaskowski012
Contributor

THX for Chad's post - interesting take on extents. As for me, I have 2 clusters, DEV and PROD; each cluster has over 300 VMs. That's a total of 20 ESX servers running on HP blades, 10 in each cluster. Every server can see the same disk. DEV and PROD VMs are on the same network, etc., so we decided to present the same disk to both clusters. We figured this would also give us the flexibility of moving ESX servers between clusters in the event of an issue, or if production needed more resources. When that decision was made, I never thought we would get to over 45TB of disk between the clusters and have so many 272GB LUNs.

sakacc
Enthusiast

Mike, Duncan - thanks for the question!

To be clear - the Symmetrix can absolutely be configured for large multi TB LUNs. What often happens is that the storage team has a "standard" way of configuring LUNs - and one operational model across mainframe and open systems.

This operational model is common in "Enterprise" Storage environments (and therefore shows up with Symmetrix), and less common in "Midrange" environments (that are usually opensystems only).

By open systems, I mean hosts like Windows and VMware ESX.

So... in mainframe environments, the storage model is generally smaller LUNs, with an LVM striping across the LUNs for more parallelism (which is often the source of these "small LUN" default configs from the SAN team).

Now - what does that architecturally look like? Multi-extent VMFS-3 configurations, of course!

There isn't the downside people normally assume ("no performance benefit" = FALSE; "higher risk" = MOSTLY FALSE). The "MOSTLY FALSE" on "higher risk" is basic: there is no higher risk from a "LUN being removed" than in non-multi-extent configurations, in the sense that:

  1. If you lose the first extent: i) in a single-extent datastore, you lose the datastore, with some possibility of VM corruption; ii) in a multi-extent datastore, you lose the datastore, with some possibility of VM corruption = SAME.

  2. If you lose something other than the first extent: i) in a single-extent datastore there is only one extent, ergo you always lose the datastore, same as 1; ii) in a multi-extent datastore, the VMs on the lost extent will disappear, with some possibility of VM corruption = SAME.

  3. The only difference: it is possible to have a VM that spans two extents, and it would also disappear if half of its storage was lost. This possibly applies to at most one extra VM per extent; in other words one VM, not several, can cross a given extent boundary.

In all cases - the possibility of VM corruption is the same, equivalent to an ESX server failing (a hard crash). VMware doesn't guarantee crash consistency, but is pretty darn good.

Net/net? It's unlikely you'll be able to get your SAN team to change their standard. At EMC we are working aggressively to integrate VMware/storage management - with the long-term goal of making the storage completely invisible (you just create the VMs, and the storage auto-configures)... so they'd better watch out - they'd better become more flexible, or they might become obsolete!

I would strongly consider using multi-extent VMFS configurations in your case. Also push your SAN team to use Virtual Provisioning - it's in Enginuity, so if they are even CLOSE to current on their array software revs, they have it. If they don't, tell them to GET WITH IT, and then just start using vSphere thin provisioning :)

My 2 cents!

Chad

MLaskowski012
Contributor

Man, this is some great info. So one more question: say I do go with extents, 45TB total in volumes of 600GB-1TB.

When building a datastore with extents, does it matter which LUNs I choose to build my volume? Should I follow some kind of order? See the attached picture - I have device #s and LUN #s.

I guess what I'm asking is: is there a best practice for creating extents? It seems everything you read about extents says the best practice is not to use extents ;)

azn2kew
Champion

I actually had this question come up during an interview today. I asked an EMC storage expert about the 272GB limit on the DMX 4, and he confirmed it's not true - though on the DMX 4 you have to configure something special to get a larger LUN - and I'm glad Chad's post above confirms this. So it's possible to make a larger LUN, but Chad has suggested using extents in your case, which can't be wrong given his role at EMC! Remember you have a max of 64TB, with 32 possible extents at the 2TB limit.

According to the guide (page 16), when adding extents, make sure you power off the VMs residing on that datastore, and make sure to rescan on each host after adding the extents so that another host doesn't accidentally use the new extent and cause data loss.

Regards,

Stefan Nguyen
mreferre
Champion

Chad,

I think most people (including me) are worried about extents simply because they are another layer of functionality added to the list of points of failure. If you don't use it, it can't fail; if you use it, it could fail. Certainly VMware has made a giant leap in making extents reliable enough to survive a single LUN failure (originally, any LUN failure would have brought down the entire volume, if I remember well). As for VMs crossing LUN boundaries, this will happen for sure... otherwise why use an extent vs. many individual LUNs? Having said this, I agree with your analysis... it's always a compromise in the end.

BTW, do you have a good doc that describes why the LUN configs are so rigid on these high-end boxes? (This is a cross-industry issue, I would say, since I understand it's the same on IBM high-end boxes...) I remember working on a VMware project years ago where the customer's storage team could only give us 10GB LUNs because of how the DS8000 was configured. UH?!? I have more hands-on experience with IBM midrange boxes, and my feeling is that they are more flexible than high-end boxes when it comes to retailing LUN space.

I guess the ultimate solution is something like XIV (or the EMC V-Max?) where the physical layout is completely virtualized, hence 100% flexibility in carving out the LUN that I need.

Thanks. Massimo.

Massimo Re Ferre' VMware vCloud Architect twitter.com/mreferre www.it20.info
sakacc
Enthusiast

Mike - you can have a maximum of 32 extents, and the maximum VMFS-3 volume size is 64TB (nomenclature: extent = partition, and is always one LUN; volume = one or more extents; filesystem = datastore, on a volume).

In practice - stay lower.

How much lower? Start by thinking about how many VMs per datastore you would like. This is a function of "eggs in one basket" vs. "lower number of datastores".

People get REALLY hung up on this, incorrectly.

Think of it this way - if you have 64 VMs per datastore (not uncommon in VMware View configurations, and how we've done it in the VMware/EMC reference architectures around View and View Composer), then with 12 datastores you're talking about 768 VMs per cluster, which starts to approach the "funky limits" of vCenter. Ergo, can you do datastores with hundreds of VMs? Sure, but in practice there's some - but limited - upside. Likewise, people often worry about managing "many datastores" - in the scenario above, we're talking about 12 datastores for an ESX cluster - not a big deal.

So, let's say you settle on 30 VMs per datastore. If your average VM is 50GB in size, that means you minimally need 1.5TB. Add 25% for vswap, ESX snapshots, and other functions. Can those other things consume more than 25%? YES - but in practice they are difficult to plan for, and it's easier to make some estimates and move on - remember, you can always svmotion yourself out of trouble, and the new managed datastore options and Storage View reports in vCenter 4 can help you alert and respond as needed. So you now have a datastore that is 1.5TB + 375GB (25% of 1.5TB).
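The sizing walk-through above can be sketched as a quick calculation (the figures are from the post; the helper names and the rounding are my assumptions):

```python
# Datastore sizing per the reasoning above: VMs per datastore times
# average VM size, plus a flat 25% allowance for vswap, snapshots, etc.
import math

def datastore_size_gb(vms: int, avg_vm_gb: float, overhead: float = 0.25) -> float:
    return vms * avg_vm_gb * (1 + overhead)

def extents_needed(datastore_gb: float, lun_gb: float = 272) -> int:
    # How many fixed-size LUNs (extents) are needed to back that datastore.
    return math.ceil(datastore_gb / lun_gb)

size = datastore_size_gb(30, 50)
print(size)                  # 1875.0 GB (1.5TB + 25%)
print(extents_needed(size))  # 7 x 272GB LUNs (rounding up to 8 leaves headroom)
```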

Then you look at the storage design to support that. If you control the SAN (which you don't), you can create a single LUN (on most arrays this can be virtually provisioned, expanded, and meta-object linked together in a variety of ways). If you DON'T control the SAN, use multiple extents to get where you need (in your case 8). The key is that since your SAN is a black box to you (you don't know how those LUNs are laid out), it's possible to "get 8 extents" that are all really on top of each other (as opposed to spread across a wide set of spindles).

If your SAN team doesn't support thin provisioning, I would use the new thin provisioning functions in vSphere. (These existed to some degree in ESX 3.x, but the catch was that thin or zeroedthick virtual disks always became thick (eagerzeroedthick) when you deployed from a template or cloned a VM. BTW, we're pushing VMware to fix this behavior in ESX 3.5, but we'll see how that goes.)

Again - my suggestion - don't overthink. Instead, move forward, monitor, and know how to react. People are so used to the physical configs, where an error or mis-estimation can be fatal. In VMware land, you can respond non-disruptively under many, many (almost all) cases, so the real penalty is in either moving slowly, or over-sizing in fear.

So - take 8 LUNs, create a multi-extent VMFS, and move on. Monitor QUED (queue depth) on the LUNs using esxtop periodically, and the vSCSI stats on an ongoing basis (easy to automate), and you'll see when QUED starts to reach high values (16-32) for each LUN backing the multi-extent VMFS. That's your cue to start creating VMs in another datastore. If you see that happening very fast, ask the storage team to look at the "heat map" for those LUNs (which spindles are hot - serving lots of IO - and which are "cold" - serving little). Chances are the LUNs supporting the extents are all on top of one another, so very few actual spindles are being used. The SAN team has lots of tools to fix that non-disruptively.
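The QUED watch described here could look something like this (the 16-32 band is from the post; the device names, the verdict strings, and the idea of feeding this from esxtop batch output are my assumptions):

```python
# Classify per-LUN queue-depth (QUED) readings for the LUNs backing a
# multi-extent VMFS, using the 16-32 warning band suggested above.
WARN_LOW, WARN_HIGH = 16, 32

def flag_hot_luns(qued_by_lun):
    status = {}
    for lun, qued in qued_by_lun.items():
        if qued >= WARN_HIGH:
            status[lun] = "saturated: rebalance or start a new datastore now"
        elif qued >= WARN_LOW:
            status[lun] = "warming: plan the next datastore"
        else:
            status[lun] = "ok"
    return status

# Hypothetical sample readings (e.g. scraped from esxtop in batch mode):
sample = {"vmhba1:0:12": 3, "vmhba1:0:13": 18, "vmhba1:0:14": 40}
for lun, verdict in sorted(flag_hot_luns(sample).items()):
    print(lun, "->", verdict)
```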

This is why the whole storage world is moving to "wide striped" (create LUNs across loads of spindles) and "virtually provisioned" (only provision blocks actually used) configurations, and most arrays (certainly all EMC arrays) support this as an option.

As is SO often the case - the technology is moving faster than the people/organization/process.

Hope this helps! I covered this at length in the storage chapter of Scott Lowe's upcoming Mastering VMware vSphere 4 book (I wrote the storage chapter).

sakacc
Enthusiast

Massimo, I hear you, but it's not entirely logical.

"Striped/plaided volumes" - where the loss of an extent/partition affects the whole volume - sure, there the use of multiple extents increases risk; in that case, fear of the "extra layer" seems to me to be totally rational.

But, that's not the case here. The risk is the same in a multiextent config as it is in a single extent config with VMFS-3. Most of the fear/uncertainty is rooted in the way VMFS-2 operated, which was the bad case above.

Now, there is one downside to multi-extent datastores. The performance of the datastore can be non-contiguous, which can make resolving performance issues (without good SAN performance tools) harder. It's for these reasons we made the EMC Storage Viewer (a free vCenter plugin) that correlates VMware datastores to LUNs to the actual backend storage objects. It's still early days, but coupled with making the storage element managers themselves (the tools the storage teams use to manage the SAN/NAS platforms) log into vCenter for correlation in the opposite direction - these are baby steps down the path we're trying to follow - ultimately to make the storage in VMware environments invisible.

So - one important best practice: make sure that all the backing LUNs have similar performance characteristics. Ergo, don't put one on RAID 6 SATA and one on RAID 10 solid state :)

Now, onto the second point....

Re rigidity, it's less about "high-end" vs. "mid-range" than what the platforms are expected to support. The EMC Symmetrix and the IBM DS8000 are deployed at most customers in configurations that support ALL sorts of hosts, all sorts of applications. In some cases, enterprises consolidate almost all their storage onto one massive array. More often, mid-range arrays are deployed for given applications. This isn't intrinsic (some customers use mid-range arrays for many apps), but at a minimum mid-range arrays don't support non-open systems (for example, you can't connect mainframes to EMC CLARiiON/Celerra, NetApp, all the HP arrays, or the IBM/LSI DS5000 stuff).

From that develops some legacy product requirements. Two examples: 1) the track-by-track very low-level provisioning models are born from that as that's what mainframes expect; 2) the idea of array port emulation (SPC-2/SCSI type) doesn't really exist when you only need to support open systems.

But more importantly, when you are supporting an entire enterprise as a team using a single platform, you can make operations more reliable by provisioning "do you want a small/medium/large, and do you want fries with that?". It's a LOT more rigid, and can be taken to bad extremes (where you have only one configuration for open and non-open systems), but it's less error-prone for the humans.

Ironically, as VMware is now being pressed into "uber consolidation" use cases - I see provisioning models emerging that are not dissimilar (only allow VMs of fixed "small/medium/large" configurations).

I can't speak for the DS8000 (or XIV), as I haven't personally used one, but I can say that these rigid provisioning models are NOT intrinsic to the EMC Symmetrix from a product standpoint. There is a nice, simple GUI, you can thin provision out the wazoo (we just made this feature free on the Symm), and there are simple templates and "auto-provisioning groups" (i.e., you can automate the whole process of provisioning storage via templates and then apply the template to a whole cluster of hosts).

Interestingly, though XIV is very interesting architecturally and represents a new architectural model (most in common with Dell/EqualLogic), it doesn't currently (I believe) support mainframes or have the "shared memory model" - two of the defining characteristics of a "high-end/enterprise" array.

Frankly, I think at the high-end, on the EMC side we're pushing this very far, very fast on the products themselves, but in some ways are victims of past success - people have built people+process around the old way of doing it - they don't leverage the new things the platforms themselves can do.

Conversely, in the mid-range, since often these are "managed by the application teams" (or people closer to the application teams) - new features/functions and fluid provisioning processes are used more quickly.

That's my take at least....

Hope to see you at VMworld!

mreferre
Champion

Chad,

thanks.

Starting from the bottom... I won't be in SF this year. Perhaps we will have another interesting chat in Cannes next year... ;)

Re the extent thing... good point on the potential perf issue. On top of the risk, there is also (in my opinion) the management overhead: sysadmins have to deal with multiple small LUNs "extented" together vs. a single bigger LUN. Not the end of the world, but for setting up DR scenarios and for troubleshooting, I'd rather keep my setup as simple as possible. Again... if this is the only option, this remains a philosophical discussion.

Re the high-end (in)flexibility: good points. I guess I was more curious about the technical limitation of not being able to expose 1 x 2TB LUN when you have enough free space available to expose 10 x 200GB LUNs (not to mention 100 x 20GB LUNs!). I guess this is something that would be challenging to figure out on a forum... :)

Thanks. Massimo.

MLaskowski012
Contributor

Chad,

Thank you!!! You're the MAN! I will have a talk with the SAN guys and probably share some of this info with them, now that I know a little more about the DMX architecture. Below I'm pasting an email that a good friend, Bogdan, sent me; he also works for EMC and is a VMware guru. I like to keep these forums / blogs as reference points, and he provided some good insight as well.

Hey Mike,

Sounds like you're stuck between a rock and a hard place. Let me give you a real quick, unorthodox/unofficial crash course in DMX architecture.

When a DMX is carved up, slices of disks are created. We call these slices hypers. On a DMX4, the largest hyper you can create is 60GB. When you apply a RAID protection scheme to a hyper, it's then called a standard device, or STD for short.

The next step is to take your STDs and assemble them with other STDs. This creates a metavolume, or META for short. The META is presented to you, and you call it a LUN.
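The hyper → STD → META arithmetic can be sketched as follows (the member counts and STD sizes below are hypothetical illustrations of my own, not the actual configuration of any array):

```python
# A META's usable capacity is the sum of its member STDs; RAID overhead
# is already accounted for at the hyper level.
def meta_size_gb(std_size_gb: float, members: int) -> float:
    return std_size_gb * members

# If the provider's standard META were eight 34GB STDs, that would
# explain a 272GB LUN ceiling:
print(meta_size_gb(34, 8))   # 272.0

# With the DMX4's 60GB maximum hyper, the same eight-member standard
# could already reach 480GB -- bigger LUNs mean bigger hypers or more members:
print(meta_size_gb(60, 8))   # 480.0
```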

To better understand/appreciate your dilemma, you need to figure out a couple of things for each LUN presented:

1. How big are the hypers?

2. What physical disks contain your hypers, and more importantly, what else is on that disk and how big is that physical disk.

3. What specific kind of RAID protection is being used. Is it RAID5 (71) or RAID5 (51)?

4. How many STDs make up your META?

Once you have this information, you can work with your storage provider to see if there's anything that can be moved around temporarily so that you can create larger hypers or larger METAs. They can probably handle creating larger METAs; modifying hypers will most likely require a BIN file change on the DMX, and EMC has to do that for you.

I can send you a pretty neat EMC whitepaper that talks more about using Symmetrix DMXes with VMware. There's even a section on the nondisruptive expansion of metavolumes that happens at the DMX level. The PDF is kind of big, weighing in at 25MB, so I can't email it, but I can probably FTP it, or you can ask your EMC account team for it. One word of caution with this PDF: it contains some SYMCLI commands. Do not run any SYMCLI commands without the express permission of your storage admin. Doing so will likely irritate them, and you may find yourself in an unpleasant situation.

Ultimately, any action required is going to need the support and cooperation of your outsourcing storage partner. If you work with them (nicely) and show them that you're not just demanding more space but really want to understand your disk layout, maybe they will be more open to fulfilling your request.

Symmetrix architecture (DMX or VMAX) is very resilient to failure and takes performance to the highest level. It's a little complicated, but it's the best out there.

mreferre
Champion

>I guess I was more curious about the technical limitations of not being able to expose 1 x 2TB LUN if you have enough free space available to expose 10 x 200GB LUNs (not to mention 100 x 20GB LUNs!). I guess this is something that would be challenging to figure out on a forum...

Mike, I think your post somewhat answers my curiosity ... Smiley Happy

Thanks.

Massimo.

MLaskowski012
Contributor

And here is another e-mail, from my VMware TAM, just as an FYI - though I think Chad took care of this theory:

Here is my 2 cents and experience with extents.

I never recommend extents, due to the fact that each actual VMDK is spread across all of those extents. If one LUN goes down, the entire VMDK goes down, with a HUGE possibility of getting corrupted. If you explode that view to 30+ VMDKs, that outage would potentially cost someone their job, I would imagine.

I don't believe vSphere's VMFS thin provisioning would help here, because you need a big LUN to start with, and in this case it would still be 272GB - Andy, correct me if I am wrong.

I would try to have more conversations with the storage team and work on carving out 500GB LUNs or more. I really don't think it's a DMX issue; I think it's the outsourcer's restrictions. Also keep in mind the rule of 10-15% free space per LUN for snapshots and other misc stuff.

The best part of SVMotion is that when you start running out of space, you can move VMDKs with zero downtime. I would recommend you continue with that.

Hope that helps.

mreferre
Champion

>I never recommend extents due to the fact that each actual VMDK is spread across all of those extents

I am not sure every VMDK is spread across all extents; otherwise, what Chad was saying wouldn't stand. The way I understand extents (from 3.x on) is that they present a single virtual VMFS volume but are used "sequentially". So the first VM will use the first LUN, the second VM likewise, etc., to the point where one VM ends up with its VMDK on both the first and the second LUN (well, I think - I hope at least - the algorithm is a bit smarter, but you get the idea). That last VM is at risk if either LUN1 or LUN2 fails. This is the additional risk of using extents.
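The "sequential fill" picture can be illustrated with a toy first-fit allocator (this simplification is my own; real VMFS block allocation is more sophisticated):

```python
# Place VMs into extents first-fit, spilling into the next extent only
# when the current one is full. At most one VM straddles each boundary.
def place_vms(extent_gb, vm_gb):
    free = list(extent_gb)   # remaining space per extent
    placement = []
    for size in vm_gb:
        used = []
        for i in range(len(free)):
            if size == 0:
                break
            take = min(size, free[i])
            if take > 0:
                free[i] -= take
                size -= take
                used.append(i)
        placement.append(used)
    return placement

# Two 272GB extents, four 100GB VMs: only the third VM spans the
# extent 0 / extent 1 boundary, so only it is exposed to both LUNs.
print(place_vms([272, 272], [100, 100, 100, 100]))  # [[0], [0], [0, 1], [1]]
```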

My 2 cents.

Massimo.

Massimo Re Ferre' VMware vCloud Architect twitter.com/mreferre www.it20.info
azn2kew
Champion

Mike,

Can you ask your EMC guru where I can download the guides on the Powerlink site? I have access but can't find the links. This has been a great discussion, especially for a newbie to DMX. Keep it going, Mike - don't close it yet!

Regards,

Stefan Nguyen