StephenTJ's Posts

We began an initiative to reconfigure all our ESX hosts to use the round robin path selection policy for all our SAN datastores as a way to improve throughput. I originally wrote a PowerShell script using PowerCLI to go from host to host (by host or by cluster) and reconfigure all detected LUNs with the round robin policy. The unfortunate problem with this method was that any LUN added afterward would have to be changed manually as well. After attending a troubleshooting class the esxcli command was brought to my attention, and after some further investigation I found it was recently integrated into PowerCLI (build 332441). With this information I decided to write a PowerCLI script to reconfigure all my hosts (either by host or by cluster) to use round robin by default. All our arrays are active/active arrays using the SATP VMW_SATP_DEFAULT_AA, so my script is configured to change this specific SATP. To use this cmdlet you must have PowerCLI build 332441, which was released in December of 2010. The script prompts for input several times but only prompts once for a username/password for the hosts. I have only run this script against ESXi 4.1, so please do your own testing and verification if you decide to make use of this PowerCLI script.
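The core of the approach can be sketched in a few lines of PowerCLI. This is a minimal sketch only, not my full script: it assumes PowerCLI build 332441 or later (for Get-EsxCli), and the vCenter name and cluster name are placeholders you would replace with your own.

```powershell
# Sketch only: make Round Robin the default PSP for the active/active SATP
# on every host in a cluster, then switch existing LUNs over as well.
# "vcenter.example.com" and "MyCluster" are placeholder names.
Connect-VIServer -Server vcenter.example.com

foreach ($vmhost in (Get-Cluster "MyCluster" | Get-VMHost)) {
    $esxcli = Get-EsxCli -VMHost $vmhost
    # New LUNs claimed by VMW_SATP_DEFAULT_AA will now default to Round Robin
    $esxcli.nmp.satp.setdefaultpsp("VMW_PSP_RR", "VMW_SATP_DEFAULT_AA")
    # Existing LUNs keep their current policy, so change them explicitly
    Get-ScsiLun -VMHost $vmhost -LunType disk |
        Where-Object { $_.MultipathPolicy -ne "RoundRobin" } |
        Set-ScsiLun -MultipathPolicy RoundRobin
}
```

Setting the default PSP is what fixes the original problem: LUNs presented later are claimed with round robin automatically instead of needing a manual change.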
Sorry, I meant that as long as the RDMs are in virtual compatibility mode you can migrate any other VMDK files that the virtual guest might have to new storage. As far as migrating the RDMs themselves to a new SAN, you are correct that it would require a backup/restore system or SAN-based migration tools to accomplish.
It is true that SVMotion isn't an option if you are using physical compatibility mode for your RDMs, but SVMotion should still work if the RDMs are in virtual compatibility mode (since they will support snapshotting capability). This KB article addresses the concerns for RDMs and the migration scenarios that you might encounter.
I would say that because of the size, RDMs are the better way to go. I say this mostly because I don't like the idea of entire datastores having one VMDK file filling them up (and I hope you don't consider extents...). Also, depending on whether you use virtual or physical mode RDMs, you either gain the benefit of the disk being treated like a VMDK file (cloning, snapshotting, etc.) in virtual mode, or in physical mode you can cluster a virtual machine with a physical server. RDMs also allow you to re-present the disk to a physical host if you need to move the server back to physical at a later time. There is no risk to vMotion capability using RDMs in physical or virtual compatibility mode, excluding using some form of clustering solution (e.g. MSCS). As far as increased complexity, you just need to be aware that the LUNs are used as RDMs and that all the hosts in your cluster are presented the LUNs in the same fashion. It really comes down to your preference: you can have a bunch of datastores dedicated to this one server (probably completely full and at their size limitation without extents), or you can have a few LUNs presented to the hosts as RDMs for this guest to access and simply be aware that it's set up that way.
After going through 4 different HP techs trying to explain what I thought was the issue, I did finally get clarification from an HP tech about this. I told them that I thought the issue was that the 16 downlink connections did not support NPIV due to the fact that they are a 1:1 connection ratio (one server HBA per port). The tech I spoke with confirmed what I thought: the 16 downlink ports do not support NPIV, only the 4 uplink ports do. Hopefully at some point a firmware update will allow for this to be supported, but until then we are out of luck, I think. If I get any further information about this I will post it.
The NPIV functionality is only available to guest servers with RDM disks. A virtual machine with NPIV configured just routes all of the traffic to its RDM disks through its own virtual port on the host's HBA. This allows the SAN admin to control access to a LUN on a per-virtual-machine basis, but the documentation says any LUN you give to a virtual machine also has to be assigned to the ESX host. I was curious whether you could assign the LUN to the virtual guest on the SAN side and see if the guest saw it without using an RDM, but unfortunately I cannot do any playing with NPIV as my hosts think my fabric doesn't support NPIV (probably because of HP Virtual Connect, not sure just yet).
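Since NPIV only applies to VMs backed by RDMs, a quick way to see which VMs qualify is to list RDM disks and their compatibility mode with PowerCLI. A minimal sketch, assuming a vCenter connection; the server name is a placeholder.

```powershell
# Sketch: list RDM disks (physical and virtual compatibility mode) per VM.
# "vcenter.example.com" is a placeholder.
Connect-VIServer -Server vcenter.example.com

Get-VM | Get-HardDisk -DiskType RawPhysical, RawVirtual |
    Select-Object Parent, Name, DiskType, ScsiCanonicalName, CapacityKB
```

The DiskType column distinguishes RawPhysical from RawVirtual, which matters because only the virtual mode RDMs support snapshot-based features.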
Your HBA does not have enough resources; that is what the failure reason means: "Failure reason 5: HBA does not have enough resources." Check and update the Fibre Channel HBA firmware. Here is a good document for configuring and troubleshooting NPIV:
I suspect it depends on your switch's Fabric OS version. I run Cisco MDS 9216s, and NPIV support is in SAN-OS versions 3.0 or later, so I had to upgrade to v3.1. Also, NPIV had to be enabled on the switch after the SAN-OS was upgraded before it would work. I cannot say if it's the same for Brocade's Fabric OS, but it's something to check on.
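For reference, enabling NPIV on an MDS running SAN-OS 3.x is a short config-mode change. This is a sketch from memory, so verify the exact syntax against the configuration guide for your SAN-OS release before running it.

```
! Sketch: enable NPIV on a Cisco MDS running SAN-OS 3.x (config mode).
! On later NX-OS releases the command is "feature npiv" instead.
switch# configure terminal
switch(config)# npiv enable
switch(config)# end
switch# show npiv status
```

The `show npiv status` check at the end confirms the feature actually took effect before you start troubleshooting the host side.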