I need to change the SCSI controller on a couple of Windows 2008 R2 machines to the VMware Paravirtual SCSI (PVSCSI) controller, but when I make the change and boot the server, it blue screens (BSOD).
I followed the steps described in the link below, but the problem remains.
http://www.vladan.fr/changement-from-lsilogic-paralel-into-pvscsi/
Any help would be highly appreciated.
Joseph
Make sure you have the paravirtual SCSI drivers installed in the guest; otherwise it won't know how to reach its disk.
They are available as a floppy image in the vmimages/floppies/ directory on a host.
Also make sure you meet the compatibility matrix in this KB:
http://kb.vmware.com/kb/1010398
Guest operating system                                                         | Data Disk        | Boot Disk
Windows Server 2008 and higher                                                 | ESX 4.0          | ESX 4.0 Update 1
Windows Server 2003                                                            | ESX 4.0          | ESX 4.0 Update 1
Red Hat Enterprise Linux (RHEL) 5                                              | ESX 4.0          | not supported
RHEL 6 and higher                                                              | ESX 4.0 Update 2 | ESX 4.0 Update 2
SUSE Linux Enterprise 11 SP1 and higher                                        | ESX 4.1          | ESX 4.1
Ubuntu 10.04 and higher                                                        | ESX 4.1          | ESX 4.1
Distros using Linux version 2.6.33 or later that include the vmw_pvscsi driver | ESX 4.1          | ESX 4.1
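If you want to sanity-check a guest against that matrix programmatically, the table above can be encoded as a small lookup. This is just an illustrative sketch; the dictionary keys and the function name are my own shorthand, and the version strings simply mirror the KB table:

```python
# Minimum ESX release required for a PVSCSI data or boot disk, per guest OS,
# transcribed from VMware KB 1010398 (the table above). None = not supported.
PVSCSI_SUPPORT = {
    "Windows Server 2008+":            {"data": "ESX 4.0",          "boot": "ESX 4.0 Update 1"},
    "Windows Server 2003":             {"data": "ESX 4.0",          "boot": "ESX 4.0 Update 1"},
    "RHEL 5":                          {"data": "ESX 4.0",          "boot": None},
    "RHEL 6+":                         {"data": "ESX 4.0 Update 2", "boot": "ESX 4.0 Update 2"},
    "SLES 11 SP1+":                    {"data": "ESX 4.1",          "boot": "ESX 4.1"},
    "Ubuntu 10.04+":                   {"data": "ESX 4.1",          "boot": "ESX 4.1"},
    "Linux 2.6.33+ with vmw_pvscsi":   {"data": "ESX 4.1",          "boot": "ESX 4.1"},
}

def min_esx_for_boot(guest):
    """Minimum ESX release for a PVSCSI *boot* disk, or None if unsupported."""
    return PVSCSI_SUPPORT[guest]["boot"]
```

For a Windows 2008 R2 boot disk, for example, the lookup shows you need at least ESX 4.0 Update 1 on the host.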
After changing the controller to PVSCSI we have seen a remarkable performance improvement on our Exchange 2010 servers; however, the transaction log volumes are getting corrupted.
After googling around I found an article mentioning that the transaction logs should be on an eager-zeroed thick disk.
Before I convert the transaction log VMDK to eager-zeroed thick, can anyone out there confirm this to be the case?
Joseph
On another note, VMware support told me there is an issue with more than 5 disks on a single PVSCSI controller, so I had to add another PVSCSI controller and move a couple of disks there. Has anybody ever come across an issue where more than 5 disks make the transaction log disk in Exchange 2010 go offline?
Joseph
The adapter queue depth of the virtual LSI Logic SCSI adapter is at least 128, and the per-LUN queue depth is 32, which is also the case for VMware ESX. Since a VMDK shows up as a virtual disk with one LUN, one virtual adapter can drive 128/32 = 4 LUNs before queuing occurs in the guest. So the recommendation would be to have at most 4 VMDKs per adapter (i.e. create another virtual adapter every time you have 4 disks on the previous one), and once you hit the maximum number of adapters, balance the disks across all adapters evenly - evenly with respect to load, that is; don't put all the most heavily used LUNs on the same adapter. I don't think there is a "supported" limit on the number of VMDKs per SCSI controller besides the SCSI standard's 15.
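The arithmetic and the placement rule above can be sketched in a few lines of Python. This is purely an illustration of the "fill up to 4 VMDKs per adapter, then spread the rest evenly by count" advice (balancing by actual I/O load is still a manual judgment call), not anything VMware ships:

```python
ADAPTER_QUEUE_DEPTH = 128   # virtual LSI Logic adapter queue depth
LUN_QUEUE_DEPTH = 32        # per-LUN queue depth (one LUN per VMDK)
MAX_ADAPTERS = 4            # virtual SCSI controllers available per VM

# VMDKs one adapter can drive before queuing starts in the guest: 128/32 = 4
DISKS_PER_ADAPTER = ADAPTER_QUEUE_DEPTH // LUN_QUEUE_DEPTH

def place_disks(num_disks):
    """Assign disk indices to adapters.

    Use just enough adapters to stay at or under DISKS_PER_ADAPTER each;
    once all MAX_ADAPTERS are in play, round-robin the remaining disks so
    the per-adapter counts stay even.
    """
    adapters_needed = min(MAX_ADAPTERS,
                          -(-num_disks // DISKS_PER_ADAPTER))  # ceiling division
    placement = [[] for _ in range(adapters_needed)]
    for disk in range(num_disks):
        placement[disk % adapters_needed].append(disk)
    return placement
```

For example, 6 disks land as 3 + 3 on two adapters, and 20 disks max out all four adapters at 5 each - past the 4-per-adapter sweet spot, but evenly spread, which is the best you can do once the adapter limit is hit.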
Thank you,
Can you refer me to the web site where you found the information about queue depth?
Joseph
http://www.domex.com.tw/support/driver/scsi/U4_SCSI/WinNT/symmpi_nt-1.08.04/symmpint.txt

--snip

2.3.2 Maximum Number of Concurrent I/Os (Guaranteed)

Windows NT 4.0 guarantees a maximum of 32 concurrent I/Os active on a particular SCSI bus. However, due to the method of memory allocation, the actual limit of concurrent I/Os can vary greatly between various drivers or versions of drivers. This can have a huge impact on performance benchmarking between different driver versions or adapter vendors. In effect, one adapter may actually be able to have 70 or 80 I/Os outstanding, while another adapter could only have 32. This can also affect systems with high performance storage subsystems, such as disk arrays.

In order to enable better performance, the driver installation process adds a registry entry to support 128 concurrent I/Os. If a different maximum value is desired, the file mpi100io.reg can be used to add a registry entry to guarantee the desired number of concurrent I/Os. There are two methods to add this registry setting. One is to locate the mpi100io.reg data file (supplied with the driver files) using Windows Explorer and double click on the file. The other method is to type at the command prompt: regedit mpi100io.reg

This inserts an entry in the registry to guarantee a maximum of 100 concurrent I/Os per adapter. If a maximum other than 100 is desired, the mpi100io.reg can be edited; however, setting this value to a high number uses increasing amounts of non-paged pool memory, a critical NT resource. High values for this setting can degrade system performance. Be sure to read the information in the mpi100io.reg data file before editing it. The system must be rebooted for the new registry setting to be effective.

To reset the guaranteed number of concurrent I/Os to the Windows NT default of 32, follow the instructions above, except use mpidefio.reg as the data file.
--end snip