Anyone tried this yet? It's not on the HCL, but I reeeeeeaally hope it will be. We just got this SAN last year, and getting a different one would be a hard sell right now.
Well it looks as though the VMware site has been updated today...with good news!
| Release | Mode | Path Policy | Firmware | Device Driver(s) | Configuration |
| --- | --- | --- | --- | --- | --- |
| ESX 4.0 U1 | VMW_SATP_ALUA | VMW_PSP_MRU | J210P19 | qla4xxx version 5.01.00.vm1 | HW iSCSI |
| ESX 4.0 U1 | VMW_SATP_ALUA | VMW_PSP_MRU | J210P19 | N/A | SW iSCSI |
| ESX 4.0 | VMW_SATP_ALUA | VMW_PSP_MRU | J210P19 | N/A | SW iSCSI |
| ESX 4.01 | VMW_SATP_ALUA | VMW_PSP_MRU | J210P19 | qla4xxx version 5.01.00.vm1 | HW iSCSI |
| ESX 3.5 U5 | Active/Active | Fixed | J210R10 | N/A | SW iSCSI |
| ESX 3.5 U4 | Active/Active | Fixed | J210R10 | N/A | SW iSCSI |
What's the difference between the P19 and the R10 firmware?
Thank you, Tom
Finally! I was beginning to think HP were lying about supporting this.
That's handy. My MSA2012i systems are already running J210P19-02, which is rated critical; the latest is J210P22-01, which is recommended.
These are the details for the firmware upgrades:
Version: J210P22-01 (16 Feb 2010)
Upgrade Requirement:
Recommended - HP recommends users update to this version at their earliest convenience.
Fixes:
Fixed an SNMP issue where not all values were updated to the current community string when it was changed from PUBLIC.
Improved the handling of a partner controller's interconnect so that an erroneous PCIe link failure is not reported and the controller is not inappropriately shut down.
Fixed a problem where the MC stopped responding when deleting a spare from a vdisk.
Fixed controller failover behavior in Active/Active configurations. If one of the controllers stops communicating with its on-board SAS expander, the controller will now fail.
Fixed an issue where a power supply failure went undetected by the array. If a failure of the first power supply went undetected and the second power supply subsequently failed, the entire array abruptly powered off.
Fixed a timing problem where a super-capacitor failure was erroneously reported during start up.
Fixed inconsistent quarantine behavior and automatic health verification with parity error check on RAID 10 vdisks with missing members.
Fixed inconsistent behavior when moving disk drives from one disk enclosure to another.
A vdisk will no longer be quarantined if the vdisk is offline due to drives marked as failed.
Fixed inconsistent location information about power supplies in log entries.
Fixed an issue where both controllers shut down with an OSMEnterDebugger error.
Corrected an issue where reconstruction of a vdisk appeared to be stopped at 0%.
Prevented a premature rescan following a reset.
Corrected the display of the Expander Status in the logs.
Prevented data about a deleted vdisk from being orphaned in controller cache during a firmware update.
Reconstruction now continues when there are minor errors on multiple disk drives in a vdisk.
Removed a critical notification message after firmware upgrades and disk scrubs. This message was used during product development and testing; it is misleading and not needed in customer installations.
Removed erroneous display of "Unknown event (234)".
Corrected available free space values displayed for a vdisk.
In the SMU, in the "Volume Mapping" page, fixed volume mapping information displayed for a LUN.
In the CLI, for the show enclosure-status command, fixed erroneous marking of the power supply as Absent.
In the CLI, for the show version command, fixed the occasional display of a blank page.
Corrected an issue where iSCSI port speed was reported incorrectly.
Enhancements:
Enhanced controller replacement logic. Configuration data is no longer reset to manufacturer's default settings when controller A is replaced.
Improved the Disk Firmware Update procedure so that it stops processing further drives if an error is encountered while updating.
Enhanced behavior of the front panel fault LED. In dual I/O module configurations, this LED now illuminates when one of the two I/O modules becomes unresponsive.
Removed unnecessary check of SNMP version to stop reporting unhelpful messages to error logs.
Enhanced the models of disk drives supported by the array.
Improved failover time in certain configurations.
Improved event log reporting to include older entries from the event buffer in the log.
Added vdisk expansion information to bootup logs.
Increased temperature threshold tolerance of internal components on the backplane.
Improved snapshot performance for some configurations.
Version: J210P19-02 (11 Feb 2010)
Upgrade Requirement:
Critical - HP requires users update to this version immediately.
Fixes:
Fixed an issue where the expander status was reporting incorrectly.
Fixed an issue where the logs from controller A and controller B were not the same.
Fixed an issue where a rebuild was not started even when a global spare was available.
Fixed an issue where a write cache error during vdisk creation would lead to a parity scrub error.
Fixed CIM and SMI-S issues so that certain classes and associations are correct and appropriately populated.
Fixed an issue where some unattached snapshots could not be deleted.
Fixed an issue where snap pool threshold values were incorrectly reported.
Fixed issues where PCIe link failures, debug exceptions, or OSMEnterDebugger errors caused a controller to shut down.
Enhancements:
Added the ability to change the public and private SNMP community names.
Enhanced the default behavior of the disk scrub utility so that it does not run constantly.
P19 is 'firmware,' P22 is 'firmware -- storage controller.'
A slight difference.
What is the proper path selection policy to use -- MRU or round robin?
I'm presently leaving its default of MRU until I find/learn something better.
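In case it helps anyone wanting to check or change this, on ESX 4.x you can inspect and set the path selection policy per device from the console with `esxcli nmp`. This is just a sketch -- the `naa.` device ID below is a placeholder, so substitute the identifier your own MSA LUN shows in the device list:

```shell
# Show each device with its current path selection policy (PSP)
esxcli nmp device list

# Switch a single LUN to round robin (placeholder device ID --
# use the naa identifier reported for your MSA LUN)
esxcli nmp device setpolicy --device naa.600c0ff000000000000000000000 --psp VMW_PSP_RR

# Switch it back to MRU if needed
esxcli nmp device setpolicy --device naa.600c0ff000000000000000000000 --psp VMW_PSP_MRU
```

That said, since the HCL entries for ESX 4.x list VMW_PSP_MRU, sticking with the default MRU looks like the supported choice here.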
Thank you, Tom
This is off the topic of the original post, but thought this would be the best place to post this question.
We have two hosts attached to our MSA SAN, with multipathing setup through 2 x Procurve switches.
I can see that there are 4 x paths listed under the Software iSCSI Storage Adapter, and I can see the virtual disks - all good there.
But when I try to ping all four of the SAN controller port IP addresses from the ESX console, I only get replies back from two of them.
Is this just the way the MSA works? Or should I get a response from all of the SAN Controller Ports?
Our SAN IP addresses are:
A0 - 10.0.0.1 (reply)
A1 - 10.0.0.2
B0 - 10.0.0.3 (reply)
B1 - 10.0.0.4
Can anyone shed some light on this?
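One thing worth checking first: on classic ESX, `ping` from the service console goes out through the console's own network interface, not the VMkernel ports that actually carry iSCSI traffic, so it can miss ports the initiator can reach (and vice versa). Testing with `vmkping` exercises the VMkernel stack instead. A quick sketch using the addresses from the post:

```shell
# Test each MSA iSCSI port through the VMkernel network stack,
# which is the path the software iSCSI initiator actually uses
for ip in 10.0.0.1 10.0.0.2 10.0.0.3 10.0.0.4; do
    vmkping $ip
done
```

If `vmkping` reaches all four ports, the iSCSI paths are fine and the console ping results just reflect the separate service console network. If A1 and B1 don't answer either way, check that those ports are cabled, configured, and on a subnet the VMkernel can reach.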