VMware Cloud Community
steveyoung
Contributor

Need help understanding DOM Objects on VSAN host MDs

I've read the information I could find on disk objects (including the best practices guide and Cormac's blog), but I can't wrap my head around why I'm seeing what I'm seeing.

I have a powered-on VM (cvf1-wshark) that lives on a 3-node VSAN cluster. Each node has 1 SSD and 4 MDs. I'm using all default policies (FTT, stripe width, etc.).

Given this, I'm wondering why the output of the RVC command vsan.disk_object_info makes it appear that VM objects are living on all 3 of my VSAN nodes. The vSphere client shows the expected RAID 1 mirroring between 2 hosts (13.21 & 13.23) plus a witness (13.22), but the DOM Objects reported on 13.22 are not the witnesses (which I wouldn't expect to show up as objects anyway). So what are these objects doing on 13.22? I see the same distribution for each of my VMs on this cluster.

Thanks

Steve

Screen Shot 2014-04-15 at 11.07.48 AM.png

Screen Shot 2014-04-15 at 11.08.03 AM.png

################

13.21

################

/localhost/VSAN-DC/computers/VSAN Cluster> vsan.disk_object_info . naa.600605b006f7fb701ab6548453137d63

2014-04-15 14:44:07 +0000: Fetching VSAN disk info from 192.168.13.21 (may take a moment) ...

2014-04-15 14:44:07 +0000: Fetching VSAN disk info from 192.168.13.23 (may take a moment) ...

2014-04-15 14:44:07 +0000: Fetching VSAN disk info from 192.168.13.22 (may take a moment) ...

2014-04-15 14:44:08 +0000: Done fetching VSAN disk infos

Physical disk naa.600605b006f7fb701ab6548453137d63 (52098356-69da-4ab9-b03f-11eea6025370):

  DOM Object: 26533353-5452-fa2f-b0f6-0025b5c2622c (owner: 192.168.13.23, policy: hostFailuresToTolerate = 1)

    Context: Part of VM cvf1-wshark: Namespace directory

    Witness: 27533353-c48b-63b3-90bb-0025b5c2622c (state: ACTIVE (5), host: 192.168.13.22, md: naa.600605b006f7dae01ab65f7317f942d1, ssd: naa.600605b006f7dae01abde7e20e633fb2)

    RAID_1

      Component: 27533353-ba62-62b3-d5c4-0025b5c2622c (state: ACTIVE (5), host: 192.168.13.21, md: **naa.600605b006f7fb701ab6548453137d63**, ssd: naa.600605b006f7fb701abde07e0de6c455)

      Component: 27533353-eeb5-60b3-7ddc-0025b5c2622c (state: ACTIVE (5), host: 192.168.13.23, md: naa.600605b006f7f9701ab604452752b6b4, ssd: naa.600605b006f7f9701abe00d3102940d7)

  DOM Object: 29533353-76d4-69d0-5601-0025b5c2622c (owner: 192.168.13.23, policy: hostFailuresToTolerate = 1)

    Context: Part of VM cvf1-wshark: Disk: [vsanDatastore] 26533353-5452-fa2f-b0f6-0025b5c2622c/cvf1-wshark.vmdk

    Witness: 29533353-0247-fce9-c547-0025b5c2622c (state: ACTIVE (5), host: 192.168.13.22, md: naa.600605b006f7dae01ab65f7317f942d1, ssd: naa.600605b006f7dae01abde7e20e633fb2)

    RAID_1

      Component: 29533353-829d-fbe9-22fc-0025b5c2622c (state: ACTIVE (5), host: 192.168.13.21, md: **naa.600605b006f7fb701ab6548453137d63**, ssd: naa.600605b006f7fb701abde07e0de6c455)

      Component: 29533353-eebd-fae9-7ec5-0025b5c2622c (state: ACTIVE (5), host: 192.168.13.23, md: naa.600605b006f7f9701ab604452752b6b4, ssd: naa.600605b006f7f9701abe00d3102940d7)

################

13.22

################

/localhost/VSAN-DC/computers/VSAN Cluster> vsan.disk_object_info . naa.600605b006f7dae01ab65f7317f942d1

Physical disk naa.600605b006f7dae01ab65f7317f942d1 (5226984a-510a-dab6-86c0-1950f37a4ee2):

  DOM Object: 26533353-5452-fa2f-b0f6-0025b5c2622c (owner: 192.168.13.23, policy: hostFailuresToTolerate = 1)

    Context: Part of VM cvf1-wshark: Namespace directory

    Witness: 27533353-c48b-63b3-90bb-0025b5c2622c (state: ACTIVE (5), host: 192.168.13.22, md: **naa.600605b006f7dae01ab65f7317f942d1**, ssd: naa.600605b006f7dae01abde7e20e633fb2)

    RAID_1

      Component: 27533353-ba62-62b3-d5c4-0025b5c2622c (state: ACTIVE (5), host: 192.168.13.21, md: naa.600605b006f7fb701ab6548453137d63, ssd: naa.600605b006f7fb701abde07e0de6c455)

      Component: 27533353-eeb5-60b3-7ddc-0025b5c2622c (state: ACTIVE (5), host: 192.168.13.23, md: naa.600605b006f7f9701ab604452752b6b4, ssd: naa.600605b006f7f9701abe00d3102940d7)

  DOM Object: 29533353-76d4-69d0-5601-0025b5c2622c (owner: 192.168.13.23, policy: hostFailuresToTolerate = 1)

    Context: Part of VM cvf1-wshark: Disk: [vsanDatastore] 26533353-5452-fa2f-b0f6-0025b5c2622c/cvf1-wshark.vmdk

    Witness: 29533353-0247-fce9-c547-0025b5c2622c (state: ACTIVE (5), host: 192.168.13.22, md: **naa.600605b006f7dae01ab65f7317f942d1**, ssd: naa.600605b006f7dae01abde7e20e633fb2)

    RAID_1

      Component: 29533353-829d-fbe9-22fc-0025b5c2622c (state: ACTIVE (5), host: 192.168.13.21, md: naa.600605b006f7fb701ab6548453137d63, ssd: naa.600605b006f7fb701abde07e0de6c455)

      Component: 29533353-eebd-fae9-7ec5-0025b5c2622c (state: ACTIVE (5), host: 192.168.13.23, md: naa.600605b006f7f9701ab604452752b6b4, ssd: naa.600605b006f7f9701abe00d3102940d7)

################

13.23

################

/localhost/VSAN-DC/computers/VSAN Cluster> vsan.disk_object_info . naa.600605b006f7f9701ab604452752b6b4

Physical disk naa.600605b006f7f9701ab604452752b6b4 (52cd5b55-c160-3fd8-2285-cfadc7b5d11e):

  DOM Object: 26533353-5452-fa2f-b0f6-0025b5c2622c (owner: 192.168.13.23, policy: hostFailuresToTolerate = 1)

    Context: Part of VM cvf1-wshark: Namespace directory

    Witness: 27533353-c48b-63b3-90bb-0025b5c2622c (state: ACTIVE (5), host: 192.168.13.22, md: naa.600605b006f7dae01ab65f7317f942d1, ssd: naa.600605b006f7dae01abde7e20e633fb2)

    RAID_1

      Component: 27533353-ba62-62b3-d5c4-0025b5c2622c (state: ACTIVE (5), host: 192.168.13.21, md: naa.600605b006f7fb701ab6548453137d63, ssd: naa.600605b006f7fb701abde07e0de6c455)

      Component: 27533353-eeb5-60b3-7ddc-0025b5c2622c (state: ACTIVE (5), host: 192.168.13.23, md: **naa.600605b006f7f9701ab604452752b6b4**, ssd: naa.600605b006f7f9701abe00d3102940d7)

  DOM Object: 29533353-76d4-69d0-5601-0025b5c2622c (owner: 192.168.13.23, policy: hostFailuresToTolerate = 1)

    Context: Part of VM cvf1-wshark: Disk: [vsanDatastore] 26533353-5452-fa2f-b0f6-0025b5c2622c/cvf1-wshark.vmdk

    Witness: 29533353-0247-fce9-c547-0025b5c2622c (state: ACTIVE (5), host: 192.168.13.22, md: naa.600605b006f7dae01ab65f7317f942d1, ssd: naa.600605b006f7dae01abde7e20e633fb2)

    RAID_1

      Component: 29533353-829d-fbe9-22fc-0025b5c2622c (state: ACTIVE (5), host: 192.168.13.21, md: naa.600605b006f7fb701ab6548453137d63, ssd: naa.600605b006f7fb701abde07e0de6c455)

      Component: 29533353-eebd-fae9-7ec5-0025b5c2622c (state: ACTIVE (5), host: 192.168.13.23, md: **naa.600605b006f7f9701ab604452752b6b4**, ssd: naa.600605b006f7f9701abe00d3102940d7)

7 Replies
depping
Leadership

I am not sure I am following you. You are doing a "vsan.disk_object_info" on a disk of a specific host, I am guessing? What happens is that it finds the components of your objects and displays all of that info. As all three hosts hold components, you see the same info every time. Look at the "DOM Object" IDs.
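
For example, if each host's vsan.disk_object_info output is saved to a file, a quick comparison of the DOM Object UUIDs shows that the three queries return the same two objects (a minimal sketch; the file names below are just placeholders):

import re

# Hypothetical file names - one saved copy of vsan.disk_object_info output per host.
files = ["disk_object_info_13.21.txt", "disk_object_info_13.22.txt", "disk_object_info_13.23.txt"]

uuids = {}
for name in files:
    with open(name) as f:
        # Lines look like: "DOM Object: 26533353-... (owner: ..., policy: ...)"
        uuids[name] = set(re.findall(r"DOM Object: (\S+)", f.read()))

for name, found in uuids.items():
    print(name, sorted(found))

# If all three sets are identical, each per-disk query is simply re-displaying the
# same objects (namespace directory + VMDK) from a different component's point of view.
print("same objects on every host:", len(set(map(frozenset, uuids.values()))) == 1)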

Duncan

------------

Book out soon: Essential Virtual SAN: Administrator's Guide to VMware VSAN (VMware Press Technology)

CHogan
VMware Employee
Accepted Solution

Yep - this is perfectly normal output.

As Duncan stated previously, you're simply displaying the same information each time you run the command.

And each object comprises 3 components (2 x RAID-1 replicas + the witness).

The other thing to watch out for is that this command shows more information than you requested, i.e. all components. However, the components that are active on the disk you pass as an argument to the command will be wrapped in **double asterisks**.

For example:

DOM Object: 26533353-5452-fa2f-b0f6-0025b5c2622c (owner: 192.168.13.23, policy: hostFailuresToTolerate = 1)

    Context: Part of VM cvf1-wshark: Namespace directory

    Witness: 27533353-c48b-63b3-90bb-0025b5c2622c (state: ACTIVE (5), host: 192.168.13.22, md: **naa.600605b006f7dae01ab65f7317f942d1**, ssd: naa.600605b006f7dae01abde7e20e633fb2)
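
So, to pick out only the components that actually live on the disk you queried, you can simply filter for the highlighted lines in a saved copy of the output (a minimal sketch; the file name is a placeholder):

# Print only the lines where the queried MD is wrapped in the double asterisks,
# i.e. the components that sit on the disk passed to vsan.disk_object_info.
with open("disk_object_info_13.22.txt") as f:
    for line in f:
        if "**" in line:
            print(line.rstrip())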

HTH

Cormac

http://cormachogan.com
steveyoung
Contributor

Thanks for the help, gentlemen - that clears up my confusion. I had figured that since the command vsan.disk_object_info takes a specific disk ID as an argument, it would show the object info for that particular disk only, not for all objects in the datastore.

I'd like to ask a quick follow-up: if I change the storage policy for this VM from the default (stripe width = 1) to stripe width = 4, nothing happens to this output; it is exactly the same. Shouldn't I see the components striped across multiple disks reflected in the output of this command? Instead, it still shows a single MD per component.

Thanks

Steve

CHogan
VMware Employee

You need to apply the policy for the change to take effect. It should be one of the icons in the VM Storage Policy view.
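
Once the policy has actually been applied, a stripe width greater than 1 shows up as RAID_0 branches nested under each RAID_1 replica in the vsan.disk_object_info / vsan.vm_object_info output. A quick way to confirm it against a saved copy of the re-run output (a minimal sketch; the file name is a placeholder):

# Count the RAID_1 mirror nodes and RAID_0 stripe nodes in the saved output.
with open("disk_object_info_after_policy.txt") as f:
    text = f.read()

print("RAID_1 mirror nodes :", text.count("RAID_1"))
print("RAID_0 stripe nodes :", text.count("RAID_0"))  # > 0 once striping has taken effect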

http://cormachogan.com
steveyoung
Contributor

I think I have: (?)

Screen Shot 2014-04-16 at 2.01.26 PM.png

steveyoung
Contributor

I believe I figured out what I was doing wrong - I wasn't including FTT as part of the rule set for my policy. Never mind, and thanks again for the replies.

steve

steveyoung
Contributor

The book looks great, Duncan. I was able to get a sneak peek on Safari and found some very useful info about what I'm seeing.

Cheers
