VMware Cloud Community
mikelundy
Contributor

High Disk Latency

See attachment. This is on a Dell PowerEdge 1955 connected to an iSCSI FalconStor SAN. There is also another PowerEdge 1955 connected to the same VMFS volume, but it is connected via Fibre Channel and does not experience this problem. There are approximately 25 VMs between the two servers, all on one VMFS. Am I overloading that filesystem?

Help is greatly appreciated!

Thanks!

5 Replies
mcowger
Immortal

40,000 ms is exactly 40 seconds; this definitely feels like a configuration error. I recommend working with FalconStor.

--Matt
VCP, vExpert, Unix Geek, VCDX #52
blog.cowger.us
athlon_crazy
Virtuoso

You should check these two things in the performance chart for every LUN:

1. Storage overload -> Stop Disk Command > 0 = storage overload.

Storage overload can cause timeouts, and some previously issued commands can be aborted, which leads to performance problems.

2. Slow storage -> physical device read latency or physical device write latency > 10 ms = storage is slow.

Check this to verify whether your storage response time is slow, measured from the time an I/O operation is submitted until the reply comes back to ESX.
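
If you want to pull those counters programmatically instead of eyeballing the charts, here is a rough Python/pyVmomi sketch. The vCenter hostname and credentials are placeholders, it just looks at the first host it finds, and it assumes the pyVmomi package is installed. It flags LUNs with aborted commands or device read/write latency above 10 ms.

# Rough sketch: flag LUNs with aborted commands or >10 ms device latency.
# Hostname/credentials below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com", user="administrator",
                  pwd="password", sslContext=ctx)
content = si.RetrieveContent()
perf = content.perfManager

# Map the counter names we care about to their numeric IDs.
wanted = {"disk.deviceReadLatency.average",
          "disk.deviceWriteLatency.average",
          "disk.commandsAborted.summation"}
ids = {}
for c in perf.perfCounter:
    name = "%s.%s.%s" % (c.groupInfo.key, c.nameInfo.key, c.rollupType)
    if name in wanted:
        ids[c.key] = name

# Take the first host in the inventory; adjust the selection as needed.
view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.HostSystem], True)
host = view.view[0]
view.Destroy()

spec = vim.PerformanceManager.QuerySpec(
    entity=host,
    metricId=[vim.PerformanceManager.MetricId(counterId=k, instance="*")
              for k in ids],
    intervalId=20,   # 20-second real-time samples
    maxSample=15)    # roughly the last 5 minutes

for result in perf.QueryPerf(querySpec=[spec]):
    for series in result.value:
        name = ids[series.id.counterId]
        total = sum(series.value)
        avg = float(total) / max(len(series.value), 1)
        if name == "disk.commandsAborted.summation" and total > 0:
            print("LUN %s: %d aborted commands -> possible overload"
                  % (series.id.instance, total))
        elif name.startswith("disk.device") and avg > 10:
            print("LUN %s: %s averages %.1f ms -> slow storage"
                  % (series.id.instance, name, avg))

Disconnect(si)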

vcbMC-1.0.6 Beta

vcbMC-1.0.7 Lite

http://www.no-x.org
_bC_
Contributor

Did anyone solve this?

I am experiencing the very same thing.

I figured out that I was overloading the SAN (96% utilization), but I don't have any idea what is causing it!

I reach the 40,000 ms (39,999 to be exact) barrier on different machines (ESX servers) and on different LUNs!

However, three configuration changes were made recently: I added a new service console network/switch,

I enabled the built-in "server backup" in Windows 2008 R2

(limited to two servers and with very low activity, around 3 GB per backup/snapshot),

and I installed the AcronisESXAppliance for physical-to-virtual backup, which is currently not in use, only installed as an agent on one ESX host (and my problem occurs on different ESX servers and SANs/LUNs).

I am running:

ESX 4.0

Supermicro servers (mainly X8DTN)

EqualLogic SANs (mainly P1000e)

Dedicated QLogic HBAs (QL4050c)

No SAN-to-SAN replication and no SAN snapshots

DyJohnnY
Enthusiast

Hi,

I ran into the exact same thing...

Check my thread; I kind of answered it myself in my case.

http://communities.vmware.com/thread/295953?tstart=0

Total latency = device latency + kernel latency

In my case, kernel latency was too high. As it turns out, the hosts were trying to connect to a dead LUN/path.

Do a rescan of all storage adapters and see if that helps.
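
If it helps anyone else, here is a rough Python/pyVmomi sketch of that rescan across all hosts. The vCenter address and credentials are placeholders, and it assumes pyVmomi is installed.

# Rough sketch: rescan all storage adapters and VMFS volumes on every host.
# Hostname/credentials below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com", user="administrator",
                  pwd="password", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.HostSystem], True)
for host in view.view:
    storage = host.configManager.storageSystem
    print("Rescanning %s ..." % host.name)
    storage.RescanAllHba()   # rescan every storage adapter (HBA)
    storage.RescanVmfs()     # rediscover VMFS volumes afterwards
view.Destroy()

Disconnect(si)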

ionut

IonutN
_bC_
Contributor

Thanks!

You pointed me in the right direction! And once I knew what to look for, it was obvious...

It was caused by a dead LUN (all the latency came from "kernel latency").

The scary part is that vCenter -> Hosts -> Configuration -> Storage -> Devices, even after a refresh, reported "Status: Normal", so I didn't go down that road any further... Lesson learned.

// Bjorn
