Moltron83
Contributor

Forgot to disable automount - Can't determine if there is a problem...


Hey, so I did a no-no and let my Backup Exec server see my ESX LUNs / datastores on my fibre channel SAN without disabling automount.  This is a new BE server, and after testing Veeam (which took care of this automatically), this setting slipped my mind.

My setup:  Win 2k8 R2 for Backup Exec server and vSphere 4.0 U2 with an EMC FC SAN.

I was setting it up quickly, worrying more about the fabric zoning, and ended up adding the server to the SAN fabric and then to the host lists on the EMC, exposing everything to the server.  After a bit I realized I had never issued the command to disable automount on the server!

I quickly took the BE server out of the FC zones so it couldn't see the LUNs anymore.  Then I ran diskpart's automount disable followed by an automount scrub.  After that I rezoned the server back in so it could see the datastores.
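For reference, the diskpart sequence looks like this when run in an elevated prompt (a sketch; the DISKPART> prompts are just how the tool echoes its interactive session):

```text
C:\> diskpart
DISKPART> automount disable
DISKPART> automount scrub
DISKPART> exit
```

automount disable stops Windows from mounting newly discovered basic volumes, and automount scrub removes the leftover mount-point registry entries for volumes the server has already seen.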

Things seem fine.  My datastores did not error out or anything in VMware; they all still exist.  On one of my ESX hosts I also ran fdisk -lu and found that everything still reads VMFS as its partition type, so I don't think Windows overwrote anything...  The thing is, SAN transport backup works but seems rather slow, about 1 GB/min.  My previous test server (a Windows 2k3 box) got about 3.5 GB/min over SAN.  Another thing that worries me: the old test server showed the datastores in Disk Management as "Healthy (Unknown Partition)" with no drive letter, while my Win 2k8 R2 box shows them as "Healthy (Primary Partition)" with no drive letter.  I don't know if that is just a difference in how Win 2k3 and 2k8 display things.  For that reason I think maybe something is still not right...

Has anyone been in this situation?  Can anyone verify that their datastores appear as mine do (as primary partitions) in Disk Management on a Win 2k8 R2 box?

-Thanks

4 Replies
vmroyale
Immortal

Hello.

You would have known right away if Windows had written signatures to those LUNs, so it sounds like you escaped without incident.  Windows 2008 does show the disks as online healthy partitions, so that part is correct.

Good Luck!

Brian Atkinson | vExpert | VMTN Moderator | Author of "VCP5-DCV VMware Certified Professional-Data Center Virtualization on vSphere 5.5 Study Guide: VCP-550" | @vmroyale | http://vmroyale.com
Moltron83
Contributor

Thanks for confirming the partitions.  That was the piece that bugged me the most, since it was the only thing different between the old Win 2k3 test server and the production 2008 server.

opbz
Hot Shot

If you get really worried, you can also look at this from the console...

This will work on regular ESX; I have not tried it on ESXi, and I have my doubts it would work there.

From the console, first run fdisk -l; that will show you a device such as /dev/sda1 for your VMFS volume.

You then run:

hexdump -C /dev/<scsi device> | less

against the device on which the VMFS should reside.

This gives you a block-level view of the LUN, with the hex bytes on the left and an ASCII column on the right.  Scrolling down you will see the signature of the VMFS volume...  If you see it, you are fine; if not, then you have issues.
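A minimal sketch of that check, assuming the LUN shows up as /dev/sdb (a placeholder; substitute whatever device your fdisk -l output reported):

```shell
# /dev/sdb is a hypothetical device path -- use the one fdisk -l reported.
DEV=/dev/sdb

# hexdump -C prints the offset, the raw hex bytes, and an ASCII column
# side by side; less lets you scroll and search the output.
hexdump -C "$DEV" | less

# Non-interactive variant: just check whether the VMFS marker text
# appears anywhere in the ASCII column of the dump.
hexdump -C "$DEV" | grep -q "VMFS" && echo "VMFS signature found"
```

Inside less you can type /VMFS to jump straight to the signature instead of scrolling.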

I have seen LUNs that Windows has signatured and stuffed, where the signature was not visible...

Anyway, sounds like you got lucky.


Moltron83
Contributor

I called VMware support on this to be sure.  They took similar steps with hexdump to the ones you described.

Basically the steps were:

  • PuTTY into a host and issue an "su -" command to become root.
  • Navigate to /vmfs/volumes.
  • Run ls here to get a listing of the volumes, and copy the name of the one in question.
  • Run "vmkfstools -P [paste the volume name from above]".
  • Copy the naa. number all the way up to the colon, I think.
  • Run hexdump -C /vmfs/devices/disks/[paste the naa. here] | less
  • Then type /1310000 to search for that offset.
  • View the signature.  I don't know exactly what support looked for here, but I could clearly see the name of the datastore in the lines around this one.
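The naa.-extraction step above can be scripted; here is a sketch, assuming vmkfstools -P prints the device as a naa.<hex>:<partition> token (the sample line below is made up purely for illustration):

```shell
# A made-up sample of the device line printed by "vmkfstools -P"
# (an assumption about the exact wording; the naa. token is what matters).
line="Partitions spanned (on 'lvm'): naa.60060160a0b01234567890abcdef0123:1"

# Keep the naa. identifier up to (not including) the colon.
naa=$(printf '%s\n' "$line" | grep -o 'naa\.[0-9a-f]*')
echo "$naa"    # -> naa.60060160a0b01234567890abcdef0123

# That value is then what you feed to the hexdump step:
#   hexdump -C /vmfs/devices/disks/$naa | less
```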

Anyhow, they said the datastores were unaffected.  Good news for me! :D
