Hello,
Let's say I have a server that I use as a file server, with 700GB of data. What is the best way to virtualize that machine with regard to the large data disk? Should I use RDM, an iSCSI solution, or just virtualize all of the disk data into a disk file?
Tips and ideas are appreciated.
Best regards
Leif
We usually use RDM when working with anything larger than 500GB.
Although I have seen much larger vmdks operate with no issues.
Thanks BryanMcC for the quick reply.
Is 500GB a VMware storage sizing guideline for RDM? The thing is, I'm after a simple solution. Isn't RDM mostly used to get good disk performance?
There are no guidelines per se for RDM. I have a vmdk over 650GB and it is simple and easy to manage. If you don't foresee the disk outgrowing your VMFS3 volume, and you have plenty of space on that volume, stick with the vmdk if that is what you feel comfortable with. These vmdk files can be up to 2TB.
Let me add a little bit more: if you don't need SCSI or SAN tools inside your VM and you are not using MSCS, you can have vmdk files up to 2TB, which is also the limit of a VMFS3 volume without extents.
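For reference, both options come down to a vmkfstools call on the ESX host. This is just a rough sketch; the datastore path, device path, and sizes (datastore1, vmhba1:0:0:0, 700G) are placeholders you'd swap for your own environment:

```shell
# Create a 700GB virtual disk (.vmdk) on a VMFS3 datastore
# (paths and size are examples -- adjust for your environment)
vmkfstools -c 700G /vmfs/volumes/datastore1/fileserver/fileserver_data.vmdk

# Or create an RDM mapping file that points at a raw LUN.
# -r = virtual compatibility mode; use -z for physical compatibility
# if the guest needs direct access for SAN tools.
vmkfstools -r /vmfs/devices/disks/vmhba1:0:0:0 \
    /vmfs/volumes/datastore1/fileserver/fileserver_rdm.vmdk
```

Either way the VM just sees another SCSI disk; the difference is whether the blocks live in a file on VMFS or on the raw LUN.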
Thanks for the advice. What about backup? If I don't use VCB, then I have to back up the whole vmdk file even if I only want to make an incremental backup.
Any solutions for that?
Are you saying that you are using a Linux-based client installed on the Service Console for backup?
We just use an agent installed in the VM and back them up just like physical machines... for now.
Management is usually fine with this method because it is easy to understand. VCB is in the works here, but we do not back up from the Service Console.
No, it's a Windows-based machine. I'm just thinking ahead; I want to minimize the network load during backups compared to the traditional way.
Anyway thanks for the advice.
You would have to use VCB to get this off of the network. As far as I know, aside from some SAN utilities, it is the only LAN-free backup option.
Anyway, good luck.
Data dedupe products like Data Domain, Avamar, or NetApp's SnapVault for NearStore are your solution for backing up large VMs. VCB really isn't the answer: although it saves you ESX CPU cycles, you are still loading the SAN to back up your entire 1TB vmdk. Some of the dedupe products will only back up the changed data on subsequent backups, yet still build a full backup that is stored on disk or tape. Since only about 2% of the data changes on average, you'll only need to back up 20GB instead of 1TB and you'll still have a full backup! The technology looks really promising.
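The arithmetic behind that claim is easy to sanity-check. A quick sketch, assuming the ~2% average change rate quoted above (an illustrative figure, not a measurement):

```python
# Back-of-the-envelope dedupe savings for a large vmdk backup.
# Both numbers below are the illustrative figures from the thread,
# not measured values from a real environment.

full_backup_gb = 1000      # size of a traditional full backup (1TB)
daily_change_rate = 0.02   # fraction of data that changes between backups

# A dedupe appliance only moves the changed data, but can still
# synthesize a full restore point from what it already stores.
incremental_gb = full_backup_gb * daily_change_rate

print(f"Data actually moved: {incremental_gb:.0f}GB instead of {full_backup_gb}GB")
```

So each backup window moves roughly 20GB over the wire while the backup target still holds a complete recovery point.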
I don't have any real world experience with these products, but I have stayed at a Holiday Inn Express and have seen the 'Markatecture' slides!