kfriendii's Posts

After chasing performance gremlins for several weeks, I've made some changes and I'm running Live Optics against my environment, and I noticed something particularly interesting that I'm hoping someone can explain to me. I don't expect this to be especially relevant, but in case it is, here are some details on the environment:

(3) node cluster running Essentials Plus with ALL HA turned off
(1) EMC VNX 5200 array with 32 x 900GB 10K disks and 9 x 2TB 7.2K disks
10Gb iSCSI software HBA
(10) datastores of varying sizes, some with deduplication turned on at the array, some on SPA, others on SPB

Some of my datastores host multiple guests, while others are dedicated to a single guest. In this particular case, a file server is on node_1 and its storage is on VMFS_05. Reviewing Live Optics data on IOPS shows the expected output for node_1.vmfs_05. But when I look at node_2.vmfs_05 and node_3.vmfs_05, I see something interesting: IOPS where I would expect there to be NONE, originating from the nodes that are not hosting the server. Make sense? Is there overhead here that I'm not aware of with iSCSI, or maybe between VMware nodes? I do see a folder called .vSphere-HA on the datastore, as well as .sdd.sf, but I have HA turned off currently... so why are nodes that have no business "talking" to VMFS_05 doing I/O against it? Curiosity more than anything.
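For anyone who wants to dig at the same question, here's roughly what I've been running on each host to see who actually has the volume open (a sketch only -- VMFS_05 is my datastore label from above; your device and volume names will differ):

    # confirm this host has VMFS_05 mounted -- a host with the volume
    # mounted can still generate small amounts of metadata I/O against it
    esxcli storage filesystem list | grep VMFS_05

    # eyeball the hidden metadata folders (.vSphere-HA, .sdd.sf, etc.)
    ls -a /vmfs/volumes/VMFS_05

    # see which VMkernel worlds have the backing device open on this host
    esxcli storage core device world list

Running the same three commands on node_2 and node_3 and comparing the output is the quickest way I know to tell whether the "extra" IOPS are coming from guest traffic or just from the hosts having the volume mounted.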
Hello Team,

Inherited a 3-node "cluster" that connects to Dell EMC VNX 5200 storage via a 1Gb iSCSI link for block. The SPs were upgraded with 10Gb interfaces on Thursday, and I had a heck of a time trying to get VMware / vCenter talking over the 10Gb pipe... in fact, I haven't figured it out yet. Hoping there is someone out there who has more experience with this than I do and can lend a hand.

Storage and ESXi hosts are on a dedicated, isolated 10Gb switch (not reachable from the ESXi management network, 10.11.10.200):

vnx5200
    1Gb
        spA.00 = 10.20.1.80
        spA.01 = 10.20.1.81
        spB.00 = 10.20.1.82
        spB.01 = 10.20.1.83
    10Gb
        spA.00 = 10.10.91.2
        spA.01 = 10.10.91.3
        spB.00 = 10.10.91.4
        spB.01 = 10.10.91.5

esxi
    10Gb
        node1.00 = 10.10.91.11
        node1.01 = 10.10.91.12
    1Gb
        node1.00 = 10.20.1.20
        node1.01 = 10.20.1.21
    node1.MANAGEMENT = 10.11.10.200

On Thursday, after assigning addresses to the new 10Gb adapters on the VNX, I vacated node1, unbound 10.20.1.20/10.20.1.21 from the storage adapter, deleted the virtual switch the iSCSI VMkernel ports were attached to, and deleted the 1Gb VMkernel adapters. I then recreated the VMkernel adapters on the 10Gb NICs, attached them to a virtual switch, and bound them to the storage adapter. The storage adapter showed "happy" as active. Oddly, however, the 10.20.1.80, 10.20.1.81, 10.20.1.82, and 10.20.1.83 addresses continued to show in static discovery, despite my deleting them manually and there being no path to them through the assigned storage adapter. Confusing; I don't understand what is happening here or why... unless the host is going out over its 10.11.10.200 management address and trying to reach 10.20.1.80/81/82/83.

After a rescan of the adapter and a rescan of storage, only LUN0 appears. I'm not seeing any of my datastores via 10Gb.

On the EMC side, node1 shows as "partially connected". When I view the host in Unisphere, I get a message stating that the host is a member of two storage groups and that this isn't recommended. Entering maintenance mode in Unisphere, I see an automagically created storage group called ~management that node1 is a member of. I tried to remove node1 from ~management, to no avail.

It's almost as if I need to "bind" the LUNs to 10Gb on the EMC side, but I'm honestly lost. It appears that the LUNs attach to the SPs, and the virtual addresses associated with the EMC adapters fall under the SP (it seems as though the LUNs should be presented on any of the virtual addresses). Any help / direction is greatly appreciated.
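For completeness, here's roughly the CLI equivalent of what I've been trying on node1 to sanity-check the 10Gb path and clean out the stale discovery entries (a sketch only -- vmhba33 and vmk2 are placeholders for my software iSCSI adapter and 10Gb VMkernel port; substitute your own names):

    # confirm the 10Gb VMkernel port can actually reach the new SP targets
    vmkping -I vmk2 10.10.91.2

    # verify which VMkernel ports are bound to the software iSCSI adapter
    esxcli iscsi networkportal list --adapter=vmhba33

    # list dynamic and static discovery entries; the stale 10.20.1.x
    # targets should show up here
    esxcli iscsi adapter discovery sendtarget list --adapter=vmhba33
    esxcli iscsi adapter discovery statictarget list --adapter=vmhba33

    # remove a stale static target (the IQN comes from the list output above)
    esxcli iscsi adapter discovery statictarget remove --adapter=vmhba33 \
        --address=10.20.1.80:3260 --name=<target-iqn-from-list>

    # rescan for devices and VMFS volumes
    esxcli storage core adapter rescan --adapter=vmhba33
    esxcli storage filesystem rescan

On the EMC side, if you have Navisphere CLI installed somewhere, something like the following (again a sketch; <sp-management-ip> is a placeholder) should show which initiator records are logged in and which storage groups exist, which may shed light on the ~management membership:

    naviseccli -h <sp-management-ip> port -list -hba
    naviseccli -h <sp-management-ip> storagegroup -list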