I have read some articles about datastore sizing, and what I understand is that keeping a datastore at 600 GB or less gives optimal performance, and that 15 VMs or fewer per datastore is also considered good practice.
It would be great if I could get more input on this.
Also, what about internal storage: rather than one 1.5 TB volume, should I create 600 GB VMFS datastores?
I really think it depends on the type of storage you are using.
Next to that, it depends on the type of servers you are running (database servers, file servers, or just a domain controller).
Some servers don't have that much I/O activity compared to others.
Tell us more about your infrastructure and type of servers you are running.
I was discussing this same topic with an EMC vSpecialist last week at VMworld.
There are a couple of factors in datastore sizing.
One is VMFS metadata locking, during which all I/O to a datastore is blocked. A VMFS lock is issued any time a new file is written to a VMFS volume; this happens during VM power on, power off, vMotion, Storage vMotion, VM cloning, VM deployment, etc. VMFS locking is mostly an issue in VMware View deployments where you have lots of workstations booting or being deployed. If you have vSphere 4.1 and a storage device with VAAI support, then this VMFS locking is not an issue at all.
Another issue is the storage device queues, which in extreme situations could fill up and cause contention. I would say this is such a theoretical problem that you should not worry about it.
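If you do want a back-of-envelope feel for when a queue could fill, here's a small sketch; the queue depth, VM count, and per-VM outstanding I/O figures below are hypothetical examples, not measurements:

```python
# Rough check of the queue-depth concern: a LUN's device queue
# (a common ESX default is 32) is shared by all VMs on that
# datastore. All numbers here are made-up example values.
LUN_QUEUE_DEPTH = 32          # assumed per-LUN device queue depth
vms_on_lun = 15               # VMs sharing the datastore
outstanding_io_per_vm = 4     # assumed average outstanding I/Os per VM

demand = vms_on_lun * outstanding_io_per_vm
if demand > LUN_QUEUE_DEPTH:
    print(f"{demand} outstanding I/Os could exceed the queue depth of {LUN_QUEUE_DEPTH}")
else:
    print(f"{demand} outstanding I/Os fit within the queue depth of {LUN_QUEUE_DEPTH}")
```

In practice the per-VM outstanding I/O is usually far below this worst case, which is why it rarely matters outside sustained heavy-I/O workloads.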
So, unless your VMs are rebooting constantly or otherwise have lots of actions going on that cause VMFS locking, I'd say go with maximum-size datastores (ESX supports up to 2 TB minus 512 bytes).
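For the curious, that "2 TB minus 512 bytes" limit works out as follows (a quick arithmetic check, nothing more):

```python
# Maximum single-extent VMFS datastore size mentioned above:
# 2 TB minus 512 bytes (one disk sector).
TB = 1024 ** 4          # 1 TB in bytes
SECTOR = 512            # bytes

max_datastore_bytes = 2 * TB - SECTOR
print(max_datastore_bytes)   # 2199023255040 bytes, just under 2 TB
```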
VCP3, VCP4, VSP4, VTSP4
As already posted by nofragger, it really does depend on what you're using for SAN/storage and what the VMs are and are doing. If you have VMs that barely use storage, are not under high demand, or don't need a lot of IOPS, then you can get away with placing more of them on a LUN. If, however, you have something like a SQL Server that gets slammed daily and needs a healthy amount of space (hundreds of GB), then you'll need to place it on a LUN with fewer other servers, if not by itself.
The last recommendation I had for sizing LUNs was to plan on having 10-15 VMs max per LUN. Size them accordingly so that you won't run out of space. If you're using thin provisioning, keep the LUN size in line with how much space would be needed if the VMs were actually thick provisioned. Of course, you always want to keep at least a 10% margin of free space (better to have more like 25%+) for snapshots and such.
In production environments, I tend to go with ~500 GB LUNs and follow the parameters above for keeping free space on each one. I also typically use thick provisioning over thin, due to the performance gains with thick (compared to thin) as well as being better able to manage the amount of free space on a LUN. This isn't to say that I wouldn't create a LUN larger than ~500 GB; it would just be a special case, such as for a server presenting shared directories/applications that needs that amount of space... Or I'd simply create a LUN and use iSCSI to connect it directly to the VM that is to share it out (thus being able to share out any amount of space needed, exceeding the 2 TB - 512 B limitation).
If you're not using higher-performing storage/SAN, such as what's commonly available from companies like EMC, EqualLogic, HP, and even Dell, then I would size the LUNs smaller and/or put fewer VMs on each one.
This is another case where I'm sure you'll get one of VMware's engineers' "typical" answers when asked about such things... The answer ends up being "it depends"... It depends on several factors, so there won't be a published hard standard number... Even if you have the exact same SAN as someone else, using identically spec'd host servers, your LAN could be different, which will impact how many VMs you'll want to run on each LUN... Far too many variables...
Consider awarding points for "helpful" and/or "correct" answers.