stainboy's Posts

Hello all, I'm having the same error. Version 5.1.2. I tried having only one NIC in the hosts, on both the primary and recovery sites. I can set up replication from site A to B and it syncs. From B to A I get that error "storage issue datastore path....". I checked for additional VMkernel ports that could have the "management traffic" tag, but I could only find one for each host. Has anyone come up with a different approach? Some lateral thinking on this?

Found the error in the logs:

2014-03-20 14:00:54.197 WARN  hms [hms-jobs-scheduler-thread-1] (..hms.util.HmsLock)  | Timeout for candidate: (Owner: 18:IssueCalcProcessor.java:228:isGroupLocked,IssueCalcProcessor.java:210:isTargetObjectLocked(GID-6f74785a-d383-4233-8a67-19d73ee9f8ed), elapsed: 0 msec) The current owner is: (Owner: 17:SecondaryGroupImpl.java:4162:lockEntity,SecondaryGroupImpl.java:4176:onLastGroupErrorChanged(GID-6f74785a-d383-4233-8a67-19d73ee9f8ed), elapsed: 86 msec) Additional candidates waiting on the same lock: []
2014-03-20 14:00:54.198 DEBUG hms.issue.IssueCalc [hms-jobs-scheduler-thread-1] (..hms.issue.IssueCalcProcessor$SingleCalculationProcessor)  | The target object HmsGroup 'GID-6f74785a-d383-4233-8a67-19d73ee9f8ed' is in use. Will try to calculate issues for it later!
2014-03-20 14:00:54.199 WARN  hms.monitor.hbr.datastore[525f51f7-46ad-21ae-40b7-ca8667324215] [hms-pcm-dispacher-thread-1] (..monitor.hbr.HbrDatastoreMonitor) operationID=9993e8a5-2630-48b2-956e-3c385f87e391 | Datastore VNX_L133_ATKU-PD01_DS002(MoRef: type = Datastore, value = datastore-83, serverGuid = null) become inaccessible.
2014-03-20 14:00:54.199 TRACE hms [hms-pcm-dispacher-thread-1] (..hms.util.EventPoster) operationID=9993e8a5-2630-48b2-956e-3c385f87e391 | POSTING EVENT:com.vmware.vcHms.datastoreInaccessibleEvent TARGET:datastore-83

I checked on the VR server and ALL hosts are correctly added, with the correct IP addresses, and all match the management interface ONLY.

21-03-2014: On the site where the VRA should access the hosts (to reach the datastores) there are certificate errors in the VRA logs. Those errors are preventing the VRA from accessing the datastores and, as such, are generating the inaccessible events.

Thx, Carlos
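If it helps anyone hitting the same thing: a quick way to double-check which VMkernel ports a host actually has (and, on newer esxcli builds, which tags like Management they carry) is from the ESXi shell. This is only a sketch; vmk0 is a placeholder interface name, and the "tag" namespace may not exist on older 5.1 builds, where you'd check the vmkernel adapter properties in the client instead.

  # list every VMkernel interface on the host (name, portgroup, MAC, MTU)
  esxcli network ip interface list

  # on newer builds, show the tags (Management, VMotion, ...) carried by one vmk port
  # vmk0 is just an example interface name
  esxcli network ip interface tag get -i vmk0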
Yes, go ahead and Storage vMotion the disks you want. Your OS will never tell the difference; I've done that a lot of times. Migrate, select the disks you want, and choose a different datastore with free space. For ESXi, as long as it has access to the VMDK it will be OK. For the OS in the VM it is transparent: you'll keep seeing the same thing (an F: or G: drive).
Hmmm, I don't think that is a "real" active/active array. You have two controllers, but I think each owns its own LUNs... So when you created the LUNs you did it on each controller separately, right? The 2040 worked that way, and only high-end arrays are truly active/active, like a VMAX. So... I think you have ALUA, but it's not available for iSCSI as far as I remember. Without it, the only way SP1 would pick up the LUNs on SP2 would be on a failover. If that's the case, make sure you divided the storage between the two controllers.
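If you want to see what the host itself decided, the NMP device list shows the claimed Storage Array Type (SATP) and path policy per LUN. Just a sketch from the ESXi shell; naa.xxxx is a placeholder for your device ID.

  # shows, per device, the Storage Array Type (e.g. VMW_SATP_ALUA vs VMW_SATP_DEFAULT_AA)
  # and the current path selection policy
  esxcli storage nmp device list

  # or for a single LUN
  esxcli storage nmp device list -d naa.xxxx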
nevermind that.
Is your NetApp active/active, or active/passive with ALUA?
Sorry, same network... no routing in between. So a red state tells you something is not correct. The MPIO entry is the settings for multipathing; it makes no sense calling it MPIO in my opinion, but that's another discussion. So go through the details and check for the red lines that show you exactly where the settings were not applied; that could point you in the direction of your problem. You can check that by going to the same place where you apply the settings; there is something like "details". It is a LONG list of ALL the settings, so it can be tedious, but in many cases it's worth the time: it will show you in red which settings failed to apply. In the meantime, I'll try to remember what I did to mitigate that problem. But check it out.
Oh, and another thing... everything is on the same network, right? No L3 in the middle...
No. It shows HW iSCSI but it is not. Try to configure it in vCenter and you'll see. Just be sure to follow the installation of the NetApp plugin (the VSC), apply the settings, reboot, and try.
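A quick way to see how the host is really claiming the adapter is from the ESXi shell; just a sketch, assuming you also want the software initiator enabled (the same thing the client does under Storage Adapters).

  # list the iSCSI adapters the host sees and how they are claimed
  esxcli iscsi adapter list

  # check whether the software iSCSI initiator is enabled, and enable it if needed
  esxcli iscsi software get
  esxcli iscsi software set --enabled=true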
OK, you could just run a command to do it to all your LUNs, but doing it manually is OK too. The next thing I would do is install the NetApp plugin (the VSC) in vCenter and apply all the recommended settings. If I remember correctly: MPIO, NFS, and another one I don't remember, but it is important. Just go and apply ALL the recommended settings. It will change queue depth and some more things. Those are for sure the NetApp recommended settings, so start there. I had a similar issue with a FAS2240 with iSCSI and vmknic binding for multipathing. When I changed to RR... huge latencies, a lot of problems. Apply the settings and reboot.
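For reference, the "command to do it to all your LUNs" I had in mind is along these lines from the ESXi shell. A sketch only: VMW_SATP_ALUA is an assumption (check which SATP your LUNs actually claim first), and the loop also catches any local naa devices, so eyeball the device list before running it.

  # make Round Robin the default PSP for the SATP your array uses,
  # so newly discovered LUNs pick it up automatically
  esxcli storage nmp satp set --satp VMW_SATP_ALUA --default-psp VMW_PSP_RR

  # switch a LUN that already exists (naa.xxxx is a placeholder device ID)
  esxcli storage nmp device set --device naa.xxxx --psp VMW_PSP_RR

  # or loop over every naa device the host sees (check the list first!)
  for d in $(esxcli storage nmp device list | grep '^naa.'); do
    esxcli storage nmp device set --device $d --psp VMW_PSP_RR
  done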
Hi. Was RR there by default, or did you change it? How did you do it? On the CLI, so that every newly added LUN would be RR, or manually in vCenter?
Hi. Share the storage (all of your datastores) with BOTH clusters (all hosts on both). With shared storage between the clusters, just power the VMs down (because of the CPU compatibility) and migrate them to a host in the other cluster. Yes, you can also unregister them and register them on the other host/cluster; a bit more work in my opinion...
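The unregister/register route, if you go that way, can also be done from the host shell instead of the client. A rough sketch; the datastore, folder and VM names are placeholders.

  # on the source host: find the VM id and unregister it there first
  vim-cmd vmsvc/getallvms
  vim-cmd vmsvc/unregister <vmid>

  # on a host in the other cluster: register the .vmx straight from the shared datastore
  vim-cmd solo/registervm /vmfs/volumes/DATASTORE/MyVM/MyVM.vmx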
Have you tried checking whether it was spaced out when you first created it? Because if anything remains there, it will give you an error.
Hi. Possible? Yes, it will "come outside", meaning that it will use the physical switches/routers/etc. To keep it simple, just use the dvSwitch, and then you can configure specific network configurations for both VMs, like private VLANs, etc.
What you can do is have 2 protected sites failing over to 1 recovery site. OR, using storage replication, have a protected site fail over to another site that has replication in place to a third site. Neither will cover what you are looking for.
Just remember, as with a Windows MSCS setup using RDMs: if your Linux is reserving those LUNs, you might end up with problems when using RR...
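One way to handle that on the ESXi side, sketched from the shell: check which PSP those clustered RDM LUNs picked up, and if they inherited Round Robin from an array-wide default, pin just those devices back to Fixed (or MRU). naa.xxxx is a placeholder for the RDM's device ID.

  # see which path selection policy the clustered RDM LUN is using
  esxcli storage nmp device list -d naa.xxxx

  # exempt it from Round Robin by pinning it to Fixed
  esxcli storage nmp device set -d naa.xxxx --psp VMW_PSP_FIXED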