VMware Cloud Community
gbmanikandan
Contributor

Server consolidation and required hardware/software

Let me give a brief overview of our current setup and what we need:

We have an Oracle/Hyperion application running on eight DL585 G2 servers (dual-core AMD Opteron 8220 processors, 8 logical CPUs per server), each with 8 GB of RAM. I am trying to bring this down to three servers using either BL465s or DL585 G7s.

Either I can go for:

DL585 G7 (4 x 12-core), 1 unit, 64 GB RAM

BL465/DL585 (2 x 12-core), 2 units, 48 GB RAM each

or BL465 (2 x 12-core), 4 units, 190 GB RAM

The data resides on SAN storage. I would also like to include VM failover (HA) and Distributed Resource Scheduler (DRS).
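As a quick sanity check, the sizing options above can be run through a short script. The host counts and RAM figures come from the options listed; the 80% usable-memory ceiling, the N+1 HA headroom policy, and reading the 190 GB as per-blade RAM are assumptions for illustration only.

```python
# Rough capacity sanity check for the consolidation options above.
# Host counts and RAM figures come from the post; the 80% ceiling,
# N+1 HA headroom, and "190 GB per blade" reading are assumptions.

def usable_capacity_gb(hosts, ram_gb_each, failed_hosts=1, ceiling=0.8):
    """Memory left for VMs when `failed_hosts` hosts are down (N+1 HA),
    keeping utilization under `ceiling` on the surviving hosts."""
    surviving = hosts - failed_hosts
    return surviving * ram_gb_each * ceiling

# Worst case: every old server's full 8 GB moves over unchanged.
current_workload_gb = 8 * 8

options = {
    "1 x DL585 G7, 64 GB":    usable_capacity_gb(1, 64, failed_hosts=0),
    "2 x BL465/DL585, 48 GB": usable_capacity_gb(2, 48),
    "4 x BL465, 190 GB":      usable_capacity_gb(4, 190),
}

for name, cap in options.items():
    verdict = "fits" if cap >= current_workload_gb else "does NOT fit"
    print(f"{name}: {cap:.0f} GB usable vs {current_workload_gb} GB -> {verdict}")
```

Note that with only two hosts, losing one leaves a single 48 GB survivor, which is why HA headroom matters as much as raw totals.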

Further clarifications:

How do I set up VM failover for the 3-4 production servers? Can VMs fail over to alternate servers?

How does the SAN relate to Distributed Resource Scheduler?

Do you have any example architecture diagrams or white papers on this?

Why do we need Storage I/O Control and Network I/O Control? Are these only available with vSphere Enterprise Plus?

3 Replies
azn2kew
Champion

We're currently running a number of HP DL585s with 256 GB each, and they hardly have any issues, so they're perfect for VMware clusters with HA, DRS, VMotion, and FT enabled. Blades only make sense if you already have a chassis in place and want to reduce data center footprint; they work out well when you have a full set of blades and utilize them fully, otherwise you pay extra for components such as Flex-10 modules, the HP c3000 chassis, and other supporting parts.

With HA, any host in the cluster can act as the failover target, and you can keep specific virtual machines apart (or together) on hosts using DRS affinity/anti-affinity rules. A DRS cluster monitors CPU and memory utilization and VMotions virtual machines across the hosts to keep load balanced; you can set it to be more or less aggressive. DRS doesn't use the SAN directly; it balances compute resources within the cluster. Storage VMotion is where the SAN comes into play: it lets you migrate a live virtual machine's disks (VMDKs) to another datastore, which is handy for maintenance or SAN upgrades.
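To make the DRS and anti-affinity idea concrete, here is a toy sketch. This is not VMware's actual algorithm, and the host names, VM names, and sizes are invented; it just shows the principle of moving the largest VM off the busiest host unless an anti-affinity rule would be violated.

```python
# Toy illustration of DRS-style balancing with an anti-affinity rule.
# Not VMware's algorithm; hosts, VMs, and sizes are made up.

hosts = {"esx1": ["web1", "web2", "db1"], "esx2": ["app1"], "esx3": []}
mem_gb = {"web1": 4, "web2": 4, "db1": 16, "app1": 8}
anti_affinity = {("web1", "web2")}  # keep these on different hosts

def host_load(h):
    """Total VM memory currently placed on host h."""
    return sum(mem_gb[vm] for vm in hosts[h])

def violates(vm, target):
    """True if moving vm to target would co-locate an anti-affinity pair."""
    return any(vm in pair and other in hosts[target]
               for pair in anti_affinity
               for other in pair if other != vm)

def rebalance_once():
    """Move the largest movable VM from the busiest host to the least busy."""
    src = max(hosts, key=host_load)
    dst = min(hosts, key=host_load)
    for vm in sorted(hosts[src], key=lambda v: mem_gb[v], reverse=True):
        if not violates(vm, dst):
            hosts[src].remove(vm)
            hosts[dst].append(vm)
            return vm, src, dst
    return None

print(rebalance_once())  # moves the largest movable VM off the busiest host
print({h: host_load(h) for h in hosts})
```

The real DRS weighs many more signals (CPU, entitlement, migration cost), but the constraint check against affinity rules works on the same principle.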

If you want hardcore architecture diagrams and great Visio work, please check out www.hypervizor.com; Hany Michael is one of the best Visio creators out there.

Storage I/O Control and Network I/O Control let you throttle the IOPS and network bandwidth a VM consumes. They matter when your storage or network is saturated and you want the contention under control, and yes, they are Enterprise Plus features.
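The proportional-shares idea behind both controls can be sketched as follows. The share values and the IOPS figure below are invented illustration numbers; the point is that during contention, each VM gets a slice of the total in proportion to its shares.

```python
# The proportional-shares idea behind Storage I/O Control and
# Network I/O Control. Share values and the 7000 IOPS figure are
# made-up numbers for illustration.

def allocate(total_iops, shares):
    """Split total_iops among VMs in proportion to their share counts."""
    total_shares = sum(shares.values())
    return {vm: total_iops * s / total_shares for vm, s in shares.items()}

shares = {"oracle-db": 2000, "hyperion-app": 1000, "test-vm": 500}
grant = allocate(7000, shares)  # what the datastore sustains under load
for vm, iops in grant.items():
    print(f"{vm}: {iops:.0f} IOPS")
```

When there is no contention, every VM simply gets what it asks for; the shares only decide who backs off when the pipe is full.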

If you found this information useful, please consider awarding points for "Correct" or "Helpful". Thanks!!!

Regards,

Stefan Nguyen

VMware vExpert 2009

iGeek Systems Inc.

VMware, Citrix, Microsoft Consultant

gbmanikandan
Contributor

Thanks for your response; very useful.

1. I am going for the DL585 G7. Are there any servers that can take more processors, or is 4 x 12-core the maximum supported here?

2. I need to migrate all the physical hosts to VMware. Does VMware Converter work flawlessly? Is it included with vSphere Enterprise, or does it need a separate license?

3. I would like to run 3-4 VMs per host. How many Gigabit ports would I need per ESX host, including redundancy?

azn2kew
Champion

VMware vCenter Converter Standalone is free and works well, and you can P2V any server you wish. For Domain Controllers, though, it's recommended to build a fresh virtual machine, fine-tune it, and DCPromo it rather than converting. It depends on what servers/apps you have; planning is the key. For databases and Exchange, make sure all services are stopped and the databases have quiesced their transactions before converting. If you can build a new VM and then migrate user mailboxes and databases into it, even better; depending on how large your VMs are, that can take time.

Your consolidation ratio depends on how big your physical servers are; with 24 cores and 256 GB you can virtualize a lot of VMs. It all comes down to workload and IOPS, so make sure your SAN can handle the IOPS, otherwise you'll see performance issues that can lead to SCSI reservation locks.

Most standard servers run on 2-4 GB of RAM, and high-end ones on 4-32 GB, so do your math and consolidate accordingly. Also monitor performance in vCenter; if a host is saturated, VMotion VMs off it and enable DRS. For networking, you need at least 2 x 1 GbE NICs attached to every port group.
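"Do your math" can be made concrete with the ranges above. The per-VM RAM figures come from this reply and the eight servers from the original question; the standard/high-end split and the 20% overhead reserve are assumptions for illustration.

```python
# Back-of-envelope consolidation math using the RAM ranges above.
# Eight physical servers from the original question; the standard vs
# high-end split and the 20% reserve are assumed illustration values.

standard_gb, high_end_gb = 4, 16          # per-VM RAM after right-sizing
vm_plan = {"standard": 6, "high_end": 2}  # assumed split of the 8 servers

needed_gb = vm_plan["standard"] * standard_gb + vm_plan["high_end"] * high_end_gb
host_ram_gb = 48                          # one BL465/DL585 option from the question
overhead = 0.2                            # reserve ~20% for hypervisor + HA slack

fits_per_host = int(host_ram_gb * (1 - overhead) // standard_gb)
print(f"Total VM RAM needed: {needed_gb} GB")
print(f"Standard VMs per 48 GB host: ~{fits_per_host}")
```

Run the same arithmetic with your own measured per-server memory figures before committing to a host count.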

If you found this information useful, please consider awarding points for "Correct" or "Helpful". Thanks!!!

Regards,

Stefan Nguyen

VMware vExpert 2009

iGeek Systems Inc.

VMware, Citrix, Microsoft Consultant
