Hi,
I have been trying to create a virtual machine in VI3 with a 400 GB virtual disk, but I keep getting the message: "The disk capacity entered was not a properly formed number or was out of range. It has been replaced with the nearest acceptable value."
VI3 then creates a 256 GB virtual disk. Is that the maximum size for VM disks?
How do I create a 400 GB (or larger) virtual disk?
Thank you.
When you format a VMFS partition you have the option to specify the block size, which determines how large a VMDK file you can create. The default block size is 1 MB, with 2, 4, and 8 MB available as well. To change the block size you have to reformat the partition.
Block size   Max VMDK size
1 MB         256 GB
2 MB         512 GB
4 MB         1024 GB
8 MB         2048 GB
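The pattern in the table is linear: each doubling of the block size doubles the maximum VMDK size. A quick sketch in plain sh arithmetic (nothing VI3-specific, just the relationship from the table):

```shell
#!/bin/sh
# Max VMDK size scales linearly with VMFS block size:
# a 1 MB block allows 256 GB, so an N MB block allows N * 256 GB.
for bs in 1 2 4 8; do
    echo "${bs} MB block size -> $((bs * 256)) GB max VMDK"
done
```

So for a 400 GB disk, any block size of 2 MB or larger will do.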
It's because the default block size is 1 MB, which gives you a max disk size of 256 GB.
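If you do decide to reformat for a larger block size, the VI3 service-console tool is vmkfstools. A hedged sketch only - the datastore name and device path below are placeholders, and this wipes the datastore:

```shell
# WARNING: creating a new file system destroys everything on the
# partition. Migrate or back up all VMs on the datastore first.
# "myDatastore" and "vmhba1:0:0:1" are examples only -- substitute
# your own datastore name and LUN/partition.
vmkfstools -C vmfs3 -b 4m -S myDatastore vmhba1:0:0:1
```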
Is there any reason why you would choose 1 MB over 4 MB? Does this affect performance?
I don't think it affects the performance of the system; it's just about using the space more efficiently. If you create small files and use a large block size, there'll be lots of wasted space.
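To put a rough number on the wasted space (ignoring VMFS-3 sub-block allocation, which is mentioned later in the thread): each file occupies a whole number of blocks, so a file much smaller than the block size wastes nearly a full block. A sketch with made-up file counts:

```shell
#!/bin/sh
# Rough wasted-space estimate: a file occupies whole blocks, so a
# small file wastes roughly (block size - file size). Hypothetical
# example: 1000 files of 30 KB each.
files=1000
file_kb=30
for bs_mb in 1 8; do
    bs_kb=$((bs_mb * 1024))
    waste_mb=$(( (bs_kb - file_kb) * files / 1024 ))
    echo "${bs_mb} MB blocks: ~${waste_mb} MB wasted for ${files} small files"
done
```

Even the worst case here is a few GB on a datastore holding hundreds of GB of VMDKs, which is why the waste tends to be negligible in practice.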
is there any reason why you would choose 1mb over 4mb? does this affect
performance?
In theory it should, but in practice I am yet to be convinced.
See my posts in this thread (it's a long thread; I should be in the first few pages).
http://www.vmware.com/community/thread.jspa?messageID=584154
I did extensive testing with each block size on the same hardware and saw little change.
Dave
it's just about using the space more efficiently. If you create small files and use a large block size, there'll be lots of wasted space.
Ah ha, that's what I thought too... but there is a document somewhere on the VMware website that explains the logic, and when you work it out, the waste is negligible.
However, there MUST be a reason, otherwise VMware wouldn't give us the option! I am going to TSX next week (Sydney, AU) - I will try to corner someone and get an answer.
Anyone else know?
Dave
[Warning - this is conjecture, not known fact.]
But how many people create lots of small files on a VMFS partition?
Not many, I would guess; normally the vast majority of space used is in the multi-GB .vmdk files. Now, the block size of the guest OS partition will affect how much space is wasted by small files on the virtual drive, but that is unrelated to the block size of the VMFS partition.
As for performance, I think it will probably depend on the storage architecture being used. Isn't it the case that, in an ideal world, the block size should be the same as
RAID stripe size x writable spindle count
For example, 9 disks in RAID5 with a 256KB stripe size. Here 1 stripe across all spindles stores 2MB, therefore best performance would be achieved with a vmfs partition set at 2MB block size. A vmfs block of 1MB wouldn't use the full bandwidth to the disk on each write cycle, and a vmfs block of 4MB would require 2 write cycles to store each block.
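The arithmetic in that example, sketched out (numbers taken from the post above, not a general recommendation):

```shell
#!/bin/sh
# Full-stripe write size = stripe element size x data spindles.
# 9 disks in RAID5 leave 8 data spindles (one disk's worth of
# capacity goes to parity, rotated across the set).
disks=9
stripe_kb=256
data_spindles=$((disks - 1))
full_stripe_kb=$((stripe_kb * data_spindles))
echo "full stripe = $((full_stripe_kb / 1024)) MB"   # -> 2 MB
```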
Thanks everyone - for this post. It helped us resolve our problem.
No, but for future expansion I am making ALL my LUNs and VMFS file systems 4 MB block size (we won't go over 500 GB files), so that if we need larger VMs, we won't be limited.
I did see a slight performance increase after alignment (you can search for this on this forum; there is EXTENSIVE discussion). When you align the VMFS block size with the guest OS block size, they tend to work better together...
So 4 MB did yield better performance for SOME VMs, and the users said it was faster.
But how many people create lots of small files on a vmfs partition?
VMware does it for you!
# ls -lh
total 5.6G
-rw-r--r-- 1 root root 28K May 21 07:55 vmware-13.log
-rw-r--r-- 1 root root 22K May 21 11:19 vmware-14.log
-rw-r--r-- 1 root root 25K May 21 12:21 vmware-15.log
-rw-r--r-- 1 root root 23K May 31 07:46 vmware-16.log
-rw-r--r-- 1 root root 20K May 31 11:12 vmware-17.log
-rw-r--r-- 1 root root 26K May 31 11:28 vmware-18.log
-rw-r--r-- 1 root root 4.6K May 31 11:32 vmware.log
-rw-r--r-- 1 root root 37 May 18 07:19 XXXXXX-7b0c1b04.hlog
-rw------- 1 root root 512M May 31 11:11 XXXXXX-e8d4faab.vswp
-rw------- 1 root root 5.0G May 31 07:46 XXXXXX-flat.vmdk
-rw------- 1 root root 8.5K May 31 11:28 XXXXXX.nvram
-rw------- 1 root root 337 May 31 11:28 XXXXXX.vmdk
-rw------- 1 root root 0 May 18 07:23 XXXXXX.vmsd
-rwxr-xr-x 1 root root 1.8K May 31 11:27 XXXXXX.vmx
-rw------- 1 root root 250 Jul 21 01:42 XXXXXX.vmxf
#
To me, only two of these files do not qualify as 'small'.
On the other hand - VMFS-3 does support 'sub-allocation'.
A vmfs block of 1MB wouldn't use the full bandwidth to the disk on each write cycle,
Nice theory. No offence, but I don't think the VMkernel waits until it has received a full VMFS block's worth of data before writing. It MUST NOT write-cache any data from the VMs.