VMware Cloud Community
PRODECO
Contributor

VDR 2.0 cifs errors

Hi.

I have upgraded from VDR 1.2 to 2.0 and have problems with the dedupe store on a CIFS share on Windows 7. The previous VDR had no problem backing up to the CIFS share, but the new version throws this error in the VDR console:

CIFS VFS: No response to cmd 5 mid 3473
CIFS VFS: Send error in Flush = -11
CIFS VFS: No response to cmd 47 mid 4004
CIFS VFS: Write2 ret -11, wrote 0
CIFS VFS: Write2 ret -11, wrote 0
CIFS VFS: Send error in Flush = -9

CIFS VFS: Send error in Flush = -9

and the last message repeats again and again...

satya1
Hot Shot


Hi, I hope these two links will help you:

http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=103785...

http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=103732...

Yours,

Satya

jim3cantos
Contributor

Same problem here.

We have two VDR 2.0 instances with two CIFS shares each, and all four shares are on the same virtual server. We also sometimes get this error:

"Trouble writing to destination volume, error -102 (I/O Error)" (KB Article: 1037327)

From KB Article 1037855, it seems that we have a configuration that is not recommended:

VMware does not recommend using CIFS shares that are:

  • On a server that has another role, such as CIFS shares on a vCenter Server
  • Connected to a virtual machine
  • Being used by other VDR appliances
  • Shared to multiple services or servers

Before splitting the server with the shares in two, we are going to try avoiding concurrent activity from the two VDR instances and see what happens...

jim3cantos
Contributor

Follow-up: the errors continued, and in the end we decided to discard CIFS shares completely and go back to local 1 TB disks in the appliance. The only reason to use CIFS was to do D2T backup, and in our case that can be done this way as well.

arinath
Contributor

Have to say, VDR is/was a horrific, stupid mess. Until I actually fixed it.

After a couple of months of watching my backups fail and then having to remove the backups and start over,

and of watching the VDR appliance's CONSTANT spewing of CIFS errors, I decided I had to man up a bit and logged into their console

(The VDR appliance turned out to be running under CentOS 5; why the HELL did they switch to SUSE for VDP???)

and did some serious digging into the guts of the thing. After trying various things, I DID eventually get it to behave.

In fact, I was able to have problem-free backups for the next 8 months, for our entire cluster (~20 servers), in a single 2 TB datastore.

(Yes, Virginia, the reason they say to use only 500 GB is because they stupidly allow only CIFS)

until we switched to Acronis vmProtect. (Which is a good product. I do get some errors, but mostly with VSS.)

For those still cursed with VDR and its constant failures and BS, I can tell you how to actually FIX it.

I have implemented it at several locations now and have had no issues whatsoever with it. It runs much faster as well.

I am sure many people will tell me I know not whereof I speak,

but for those who are still banging their heads against VDR, try the following:

1: Go into your VI Client and remove any existing VDR destinations.

2: Set up your backup repository as an NFS share, NOT SMB.

     (Clear out the almost certainly corrupt backups that will be left behind.)

3: Log into the VDR appliance console; the default username is root, the default password is vmw@re.

4: Issue the following command to install the NFS client:

          yum install nfs-utils nfs-utils-lib     (Internet access required, obviously)

5: Create a mountpoint somewhere; I used /backup1 (mkdir /backup1).

6: Create an fstab entry so that your NFS share mounts to your mountpoint on boot.

     Edit the /etc/fstab file using nano (yum install nano), vi, whatever, adding the line:

     10.10.10.62:/vmdr           /backup1               nfs                rsize=8192,wsize=8192,timeo=30,intr

     (ServerIP:NFSshare)     (Your mount point)  (share type)  (Options, which may differ according to your network)

     You can test the entry without rebooting; see the sanity check after these steps.

7: Then reboot the appliance. (I use reboot -f because it takes freaking FOREVER for their stupid services to stop.)

8: Now you can go into your destinations and see the share you just mounted. Run an integrity check. Watch it zip up to 100%.

9: Create backup jobs pointing at it.
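Before the reboot in step 7, you can sanity-check the fstab entry by hand. (Not part of the original steps; these are standard commands, and the mountpoint is just the /backup1 example from steps 5 and 6.)

     mount -a                  # mount everything in /etc/fstab that isn't mounted yet
     mount | grep backup1      # confirm the NFS share actually attached
     df -h /backup1            # verify the expected capacity shows up

If the share does not mount here, fix the fstab line before you reboot.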

Every time I have done this it's like magic, and all of a sudden VDR is not shitty anymore.

Why the supposed geniuses at VMware couldn't work this one out, I have no clue. Anyone can tell you that NFS > SMB any day anyway.

I never posted this before, mainly because I don't regard it as MY job to fix something THEY charged a LOT of money for.

I believe that if you are the ones wanting money for it, you can write your own damn KB; I ain't doing it for you for free.

I should bill them for fixing their product, in fact.

fgallego
Contributor

Hello arinath, I followed what you did; however, when I tried to run an integrity check I got the error message "The integrity check failed because there was a problem with the destination".


And because of this error I couldn't run the backup jobs...


By the way, even though it's mounted as an NFS share, it's shown as "Network share" in "Destinations". That's correct, isn't it?


Regards

arinath
Contributor

@fgallego

Actually, no. When you mount a given NFS share onto the VMDR appliance, it is perceived as a LOCAL storage device.

VMDR should see/treat it as if it were a drive installed inside that machine.

Mind you, this procedure was created and tested on VMDR, so if you're using the new VDR I expect it will not work.

As for things that could have gone wrong, I shall for the nonce assume you did not paste the fstab line exactly as written

(you need to edit it to reflect the needs of your network).

Also important is that the NFS share you created be EMPTY when you initially set it up.

I know these responses may not be that helpful, but we ARE limited by the lack of good error reporting in VMDR.

I mean, 'Problem with destination'. Wow, that's useful. Thanks, I would have never guessed it had a problem because it failed :)

Without more info it's quite difficult to diagnose, and all I can do for you is venture some guesses.

Did you see anything more informative in the console error display?

You can get more useful information by using the dmesg command in the console, and by 'more-ing' the log files, which nominally reside in /var/log/nfsd.

(You MAY need to increase logging verbosity on NFS, which you should be able to do in /etc/nfs/nfslog.conf)
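For example, from the appliance console (standard commands; nfsstat comes with the nfs-utils package installed in step 4 above, and exact log paths may differ on your build):

     dmesg | tail -50              # recent kernel messages, including NFS mount/IO errors
     nfsstat -c                    # client-side NFS call and retransmission counters
     more /var/log/messages        # general system log, worth checking alongside /var/log/nfsd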

Were any errors spawned when installing NFS on the VMDR appliance?

Can you tell us what kind of device your NFS share resides on? (i.e. NAS appliance, Linux box, Windows box; you did not say...)

The fstab entry may vary WILDLY in terms of the share address between some products....

For instance, if you are using a QNAP NAS device it would be something like [IP of QNAP]:/share/[name of share you created] instead of the syntax outlined above.
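So a QNAP-flavored fstab line might look like this (a purely hypothetical example; substitute your own NAS IP and share name):

     192.168.1.50:/share/VDRBackup     /backup1     nfs     rsize=8192,wsize=8192,timeo=30,intr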

And no, this 'workaround', while it has always worked awesomely for me, is NOT what VMware would call a 'supported solution', but I will help if I can :)
