I'm curious what other orgs' hardware replacement strategy is and how it's affected by virtualization. Ours is greatly affected by virtualization and the fact that our virtual environment is continually getting more and more dense (more VMs crammed into a smaller and smaller footprint). To keep it simple, let's keep all discussion to x86 servers, storage, and network infrastructure (no pseries, LPAR virtualization, etc).
Issues/values we consider include:
Are there other issues you consider in your org? What pain points and conflicting issues come up for you? One for us is this: how do we act as good stewards of slightly older servers that aren't vMotion-compatible with newer, denser servers? We've found that perfectly good blades that are 4-5 years old are actually worth less than the air inside an empty blade chassis slot (in other words, the value of keeping a slot open for a future, new blade is greater than the value of these older blades, which are prohibitively expensive to upgrade).
Thanks for the feedback.
Hardware refresh has become much simpler with virtualization. When staying at the same release level of the hypervisor, all you need to do is present the shared storage to the new ESXi hosts and vMotion the VMs over if the CPUs are compatible. If they are not, you can either build new clusters with EVC enabled, or, VM by VM, power each one down, remove it from the old environment, and add it to the new one.
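For what it's worth, the decision above can be sketched as a simple function. This is just the logic, not PowerCLI or any real vSphere API, and the function and parameter names are my own:

```python
def migration_path(cpus_compatible: bool, evc_possible: bool) -> str:
    """Pick how to move VMs from old hosts to new ones during a refresh.

    cpus_compatible: the new hosts' CPUs allow live vMotion from the old hosts
    evc_possible:    a new cluster with EVC enabled can mask the CPU differences
    """
    if cpus_compatible:
        return "live vMotion to the new hosts"
    if evc_possible:
        return "build a new EVC-enabled cluster, then vMotion into it"
    # Last resort: cold migration, one VM at a time
    return "power down each VM, remove from old environment, add to new"
```

The point is simply that there is always a path; the only question is whether the VMs stay up while you take it.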
If you are also upgrading to a new version of vSphere, VMware has done a good job of letting the new version manage hosts running older versions.
I can appreciate your need to think this through, but like the other poster implied, it's not a big deal. As long as your cluster is sized for at least N+1 from a host perspective, then as hosts age, replace them as needed. Rolling hardware upgrades happen all the time, and the VMs are none the wiser. There can be some benefit to keeping the hardware as similar as possible (chipset features, and calculating HA N+1 numbers is easier when hosts have similar amounts of RAM and compute).
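To make the N+1 point concrete, here's a rough headroom check. This is my own simplification: it only looks at RAM, ignoring CPU, HA admission-control slot sizing, and overcommit:

```python
def survives_host_failure(host_ram_gb: list, total_vm_ram_gb: float) -> bool:
    """True if the cluster can lose its largest host and still fit all VM RAM."""
    if len(host_ram_gb) < 2:
        return False  # a one-host "cluster" has no failover capacity
    remaining = sum(host_ram_gb) - max(host_ram_gb)
    return total_vm_ram_gb <= remaining

# Example: four 256 GB hosts running 600 GB of VM RAM can lose a host
# (768 GB remains), so rolling one host out at a time is safe.
```

Run that check before pulling a host out for replacement and the rolling upgrade is a non-event.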
I recently had a situation where I replaced 4 of my 7 nodes, moving from old Harpertown-based Intel processors to Sandy Bridge processors: double the physical cores (from 8 to 16), quadruple the logical cores (from 8 to 32), and six times the RAM. That capacity easily absorbed our internal demand. It was such a significant change that it made a legitimate argument for not using up valuable vSphere licenses on the older physical hosts. It's just not worth it.
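The arithmetic behind that kind of consolidation is worth running before a refresh. A back-of-the-envelope sketch using my node counts and core ratios (the per-node RAM figures are illustrative, only the 6x ratio is real):

```python
# Per-node capacity: old Harpertown node vs. new Sandy Bridge node
old = {"logical_cores": 8, "ram_gb": 64}     # RAM figure is illustrative
new = {"logical_cores": 32, "ram_gb": 384}   # 4x logical cores, 6x RAM

old_cluster = {k: 7 * v for k, v in old.items()}  # all 7 original nodes
new_cluster = {k: 4 * v for k, v in new.items()}  # the 4 replacement nodes

# Just 4 new nodes out-muscle the entire 7-node old cluster:
# 128 vs 56 logical cores, 1536 vs 448 GB RAM
print(new_cluster, old_cluster)
```

Which is exactly why burning per-socket vSphere licenses on the remaining old hosts stops making sense.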