Hi,
Back from holiday, I found that a colleague had added 20 hard disks to a Linux VM. I haven't investigated yet why the developer asked for that many disks, but I feel kind of disturbed at having to manage them, and I wish I were able to give good reasons why this is bad.
What are the pros and cons of having 20 small VMDKs (in the range of 2 to 20 GB each) instead of one big file, apart from my desire for a clean, easy-to-manage VI?
Thanks
Disadvantages would include increased SCSI reservations as well. And it's a PITA to manage. It will also limit your ability to add other devices (NICs, etc.) to the VM (you're getting close to the device limit for VM hardware version 7).
This does not sound like a good idea at all. Ask your Linux guy why he needs so many disks.
Perhaps it would be better to create separate VMs for each of the major functions that this server will perform?
If this is all for a single application server of some type, then you need to ask the hard question: what developer in their right mind would build an application that needs to span so many partitions?
Regards,
Paul
He should just create one large disk with multiple partitions instead. It's really a PITA to manage 20 disks (to administer, back up, etc.). If the reason for using so many disks is so that he can revert his work, you can advise him to use snapshots instead.
@mcowger:
In fact, he already had two virtual SCSI adapters created, since 15 VMDKs is the limit for a single adapter. I spoke with the developer today; he wanted separate disks for the system, logs, indexes and other stuff, but somehow lost control of the situation and started adding disks just to get more space.
Now, given that he's willing to step back, what would be the best way to consolidate the disks?
Thanks
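If the guest uses LVM (a common setup, but an assumption here), one way to consolidate without downtime is to add a single large VMDK, migrate the data onto it with `pvmove`, and then retire the small disks one at a time. A rough sketch, assuming the new disk shows up in the guest as `/dev/sdb`, one of the small disks is `/dev/sdc`, and the volume group is named `vg_data` (all placeholder names; adjust to your environment):

```shell
# Sketch only -- run inside the guest after adding one large VMDK in vSphere.
# Assumes the small disks are LVM physical volumes in a volume group called
# "vg_data" and the new large disk appears as /dev/sdb.

# 1. Turn the new disk into an LVM physical volume and add it to the VG.
pvcreate /dev/sdb
vgextend vg_data /dev/sdb

# 2. Move all allocated extents off one of the small disks.
#    Repeat this step for each small disk in turn.
pvmove /dev/sdc

# 3. Remove the emptied disk from the volume group and wipe its PV label.
vgreduce vg_data /dev/sdc
pvremove /dev/sdc

# 4. Detach the now-unused VMDK from the VM in the vSphere client and delete it.
```

If the disks hold plain filesystems rather than LVM volumes, the equivalent is a filesystem-level copy (e.g. with rsync) onto partitions of the new disk, followed by updating /etc/fstab and remounting.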