JasonCzerak
Contributor

Validated configurations. Dev, QA & Production. Audits and Proof

I'm in need of information (white papers would be awesome) that will show auditors that using VMware to host a validated QA guest OS is not going to affect production.

The general rule of thumb for a regulated environment is that QA hardware has to be comparable to production hardware. Comparable, not identical: I do not need a DL585 with 10 SCSI disks, 16 GB of RAM, and 4 CPUs for QA just because that's what production requires, right? That makes logical and economical sense when we deal with web applications (Apache, Perl, and Oracle). In this case, QA is a DL145 with one CPU, 2 GB of RAM, and a mirrored pair of disks. This is in place right now. Development is on a DL145 loaded up with VMware Server (soon to be ESX) running six other environments that make up development plus some side environments.

A new project in the works will be on a different platform (Windows, IIS, Oracle). We are already purchasing a small farm of DL145s for VMware to migrate desktop servers and old Sun stuff. This new project is small and needs few resources; the production server will be a DL145. Dev will be done in a VM, naturally. But I would like QA on a VM as well!

Now here's the question: how do I PROVE that the QA environment in VMware (on the same hardware, with the VM layer on top) is equivalent to production? We can already prove a DL145 is comparable to a DL585; it's the same class of hardware. Pairing a DL585 with a completely different class of server would not fly for a QA/prod transition, for example.

Driver differences like software mirroring versus a hardware RAID controller are considered OK. Short of that, the servers are rather close. In VMs you get the VMware drivers, but you do see an AMD or Intel chip and all of its features.

In the end it's different, but it's really not much more different than a DL145 is from a DL585. How do I argue this?
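
For the audit trail, I'm picturing evidence along these lines (a rough sketch in Python, assuming Linux guests; the field list is only my guess at what an auditor would want to see):

```python
# Rough sketch: collect the same "fingerprint" from the QA VM and the
# physical production box, then diff the two outputs for the audit trail.
# Assumes Linux guests; the chosen fields are examples, not an approved list.
import json
import platform

def fingerprint():
    """Gather the OS-level facts the application actually sees."""
    with open("/proc/cpuinfo") as f:
        cpuinfo = f.read()
    flags_line = next(l for l in cpuinfo.splitlines() if l.startswith("flags"))
    return {
        "os": platform.platform(),        # kernel / distro level
        "arch": platform.machine(),       # e.g. x86_64
        "cpu_flags": sorted(flags_line.split(":", 1)[1].split()),  # SSE3 etc.
    }

if __name__ == "__main__":
    # Run on both boxes, save as qa.json and prod.json, then:
    #   diff qa.json prod.json
    print(json.dumps(fingerprint(), indent=2, sort_keys=True))
```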

4 Replies
kix1979
Immortal

The underlying hardware is abstracted away from the VM, so the guest sees a common set of drivers, with the exception of the CPU. The CPU will be whatever the physical host has, including all of its features (SSE3, etc.). The network, disk, and other devices are presented the same regardless of the HP, Dell, or whatever hardware underneath.
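
A quick way to see that (my sketch, assuming a Linux guest with pciutils installed): list the PCI devices inside the guest; the virtual hardware is VMware's, not the host vendor's, so the same list comes back on any host.

```python
# Sketch: inside a VMware guest the PCI bus shows VMware's virtual devices
# (NIC, SCSI controller, SVGA), identical regardless of the physical host.
# Assumes a Linux guest with pciutils (lspci) installed.
import subprocess

out = subprocess.run(["lspci"], capture_output=True, text=True).stdout
for line in out.splitlines():
    if "VMware" in line:  # the virtual devices advertise themselves as VMware
        print(line)
```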

If the drivers the VM sees were different on each host, the VM would require a reboot and new drivers every time you vMotion or cold migrate to a new host. Do a cold migration to a new hardware platform and show them there are no new drivers.
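
Something like this sketch could serve as the before/after evidence (assuming a Linux guest; on Windows you could dump the driver list with driverquery instead):

```python
# Sketch: capture the loaded driver (kernel module) list before and after
# a cold migration; an empty diff shows the guest needed no new drivers.
# Assumes a Linux guest. Usage:
#   python drivers.py > before.txt    (before the migration)
#   python drivers.py > after.txt     (after the migration)
#   diff before.txt after.txt
import subprocess

out = subprocess.run(["lsmod"], capture_output=True, text=True).stdout
modules = sorted(line.split()[0] for line in out.splitlines()[1:] if line.strip())
print("\n".join(modules))
```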

Thomas H. Bryant III
JasonCzerak
Contributor

I understand that aspect. But how do you argue to an auditor that those drivers, though totally different or simply not there (hardware RAID, for example), do not matter in a VM environment?

Ken_Cline
Champion

It really boils down to the fact that the application runs on Windows (or Linux, or whatever), not on the hardware. Regardless of what HW you are running on, there are several layers of abstraction between your application and the physical box - there's the BIOS and then a whole assortment of device drivers. It's a rare application today that actually interacts directly with the HW.

If the various device drivers all comply with the same standard (NDIS / ASPI / whatever), then the application won't know the difference...
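
A trivial illustration (my sketch, nothing VMware-specific in it): this code talks only to the standard sockets API, so it runs unchanged whether the NIC underneath is a VMware virtual adapter or a physical card.

```python
# The application never names a driver or a NIC model; it only uses the
# OS's standard sockets interface. Everything below that layer can change
# (physical Broadcom, VMware virtual NIC, ...) without the code noticing.
import socket

with socket.create_connection(("example.com", 80), timeout=5) as s:
    s.sendall(b"HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n")
    print(s.recv(200).decode(errors="replace"))
```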

Ken Cline | VMware vExpert 2009 | VMware Communities User Moderator | Blogging at: http://KensVirtualReality.wordpress.com/
oreeh
Immortal

If this mattered in a VM environment, it would matter in a physical environment too (with far more consequences).

Does your browser stop working after you change the NIC? No.

Does your XP box stop working after adding memory? No (it might require reactivation, but that's a totally different thing).
