crazex's Accepted Solutions

There really is no good, or easy, way to accomplish what you are asking. Your best bet is to set a schedule for when the servers will be rebooted. If Windows requires a reboot, it doesn't matter if it is physical or virtual; it needs to reboot. I have a scheduled time every month when our servers are rebooted to apply all recently installed updates. If I have to apply out-of-band updates, I coordinate the reboot so that it is after business hours and does not really affect the end users. If you are using WSUS to do your updates, you can set up a GPO so that the machines will not automatically reboot, if that is what you are truly worried about. -Jon- VMware Certified Professional
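If you'd rather set that policy directly than through the GPO editor, the equivalent registry value looks roughly like this (a sketch; this corresponds to the "No auto-restart with logged on users for scheduled automatic updates installations" policy, and assumes you run it with admin rights on the target machine):

```shell
:: Stop Automatic Updates from rebooting while a user is logged on.
:: Same effect as the matching Windows Update GPO setting.
reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU" /v NoAutoRebootWithLoggedOnUsers /t REG_DWORD /d 1 /f
```

Note this only defers the reboot while someone is logged on; the update still needs its reboot eventually.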
As long as the CPUs are in the same series, which yours seem to be (23xx series), they will be VMotion compatible. And, as I stated before, if you are using at least Update 2, you can use EVC to ensure that they are compatible by setting the EVC mode for AMD processors. Just noticed that you were asking if you could use the 2380 instead of the 2384. That shouldn't be a problem. -Jon- VMware Certified Professional
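One quick sanity check before attempting VMotion is to compare the CPU model strings on both hosts (a sketch, run from each host's Linux-style service console; both should report processors from the same family, e.g. Opteron 23xx):

```shell
# Print the first CPU model string on this host; run on both hosts and compare.
grep -m1 'model name' /proc/cpuinfo
```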
VC is officially supported in a VM. I actually just migrated my VC from a physical server to a VM without any problems. The only thing you'll have to remember is that while doing maintenance you'll want to make sure the VM that is running VC is shut down last. -Jon- VMware Certified Professional
I believe that Vizioncore's best practice says to put vRanger on a physical Windows host. It works in a VM, but when you are doing backups, you will have a lot of network load on the ESX host that is running the vRanger VM. -Jon- VMware Certified Professional
You won't need to "pair" them per se. What you are going to need to do is run your Ethernet cables to the 2 separate switches, so port A to switch A and port B to switch B. On the back end, your SAN controllers will be zoned to both switches. You'll have to configure your HBA so that it is in the IP range that the iSCSI zoning understands. Now, after your SAN admins create your LUN, they should also be able to set up the multipath config for your host and then zone the LUN to it. After this is done, you should be able to scan your HBAs and see the new volume. Also, from what I remember, VMware does not currently support Active/Active connections, just Active/Passive. -Jon- VMware Certified Professional
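If you end up using the software iSCSI initiator instead of a hardware HBA, the service-console side of this looks roughly like the following (a sketch; the vmhba number and the target portal IP are placeholders for your environment):

```shell
# Open the service console firewall for software iSCSI traffic
esxcfg-firewall -e swISCSI

# Enable the software iSCSI initiator
esxcfg-swiscsi -e

# Add the SAN's iSCSI target portal (placeholder IP) as a send target
vmkiscsi-tool -D -a 192.168.50.10 vmhba40

# Rescan so the newly zoned LUN shows up
esxcfg-rescan vmhba40
```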
ESX 3.5 and VC 2.5 have the functionality. -Jon-
This is correct. DRS is pretty much worthless without shared storage. If you don't have shared storage, the only piece of DRS that will work is the initial placement of the VM. After that, it will never be able to move to another host, as all the data resides on a local disk. I'm not even sure why any document would state to use DRS without shared storage. I think most of the people posting here missed the point of your question. The answer is: yes, DRS can be enabled without shared storage, and you can add hosts to a DRS cluster without affecting the running Virtual Machines; however, DRS is pretty much worthless if you don't have shared storage, as the feature will not be able to migrate any VMs. So if it can't do this, it is no better than you manually evaluating your CPU and memory load and setting up a VM on the host that is the best candidate. -Jon-
Technically, yes, you can set it up this way. However, if you do set it up like this, don't plan on using any VM templates, as they will not work. This is a known issue with VC and VCB residing on the same server, and thus why it is not a "supported" configuration. Well, after reading Tex's post: is this something they fixed with the release of 3.02, 2.02, and 1.03? I was not aware of this. -Jon-
VCB uses the VCB framework to mount the snapshots, not Windows's native mounting. -Jon-
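As an illustration, a typical full-VM mount through the framework looks something like this (a sketch run on the VCB proxy; the VC server name, credentials, VM address, and export path are all placeholders):

```shell
:: Mount a full-VM snapshot via the VCB framework (not a Windows mount).
:: The VM is identified by its IP address; the export lands under -r.
vcbMounter -h vcserver -u backupuser -p passwd -a ipaddr:192.168.1.50 -r C:\mnt\vm1-fullvm -t fullvm
```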
The logos that you see there are the new logos that go along with the VCP on VI3. As far as I am aware, the usage guidelines are the same. -Jon-
We are also a Dell shop, and we are currently running 4x 2950s with the Intel X5355 quad-core chips, 16GB of RAM, and 4 NICs. With a configuration like this you will run out of RAM resources much faster than CPU, especially if all of your VMs can be single vCPUs. We are running our ESX cluster on a FC SAN, so we are able to get away with 4 NICs, though I would prefer at least 2 more.

Since you are planning for iSCSI, I would start with no fewer than 6 NICs (integrated Broadcom and a 4-port Intel PRO/1000). You will use 2 of the NICs for iSCSI traffic, 1 for the Service Console, 2 for VMs, and 1 for VMotion/SC failover. If most of your servers aren't very CPU intensive, you may want to look into the Woodcrest dual cores to save some money, as you'll be able to cut costs here, and then use the SAS drives for the OS. The MTBF on the SAS drives is much better than SATA.

We are using a Compellent SAN, which gives us the ability to use both FC and iSCSI. I am not sure how cost effective this would be for your environment. I haven't heard many good things about the EMC AX150, but since you are planning for iSCSI, why not look into EqualLogic? Many people on these boards use them, and they seem to be tied in pretty closely with VMware. I can't imagine that a 1-2TB EqualLogic will be all that expensive, and the EQ will be much more scalable than the AX150. -Jon-
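A 6-NIC layout like that could be carved up from the service console roughly as follows (a sketch; the vmnic numbering, vSwitch assignments, and port group names are my assumptions, not a required layout):

```shell
# vSwitch0: Service Console uplink (vmnic0) plus the VMotion NIC (vmnic5),
# which doubles as SC failover
esxcfg-vswitch -a vSwitch0
esxcfg-vswitch -L vmnic0 vSwitch0
esxcfg-vswitch -L vmnic5 vSwitch0
esxcfg-vswitch -A "VMotion" vSwitch0

# vSwitch1: iSCSI traffic, two uplinks for redundancy
esxcfg-vswitch -a vSwitch1
esxcfg-vswitch -L vmnic1 vSwitch1
esxcfg-vswitch -L vmnic2 vSwitch1
esxcfg-vswitch -A "iSCSI" vSwitch1

# vSwitch2: VM traffic, two uplinks
esxcfg-vswitch -a vSwitch2
esxcfg-vswitch -L vmnic3 vSwitch2
esxcfg-vswitch -L vmnic4 vSwitch2
esxcfg-vswitch -A "VM Network" vSwitch2
```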