VMware Cloud Community
mrcreosote
Contributor

CPU Usage of software iSCSI initiator

Hi.

We are considering using the software iSCSI initiator over 10 Gig Ethernet. My concern is host CPU usage. Does anyone have any experience of the CPU overhead this could place on the host?

Also, is this overhead tied to CPU0 (as I believe the Service Console is)? If so, could using 10 Gig saturate CPU0 and affect SC performance?

Thanks for your help.

FB

5 Replies
BUGCHK
Commander

The software iSCSI initiator runs inside the VMkernel using its IP stack; the Service Console is used only for authentication.
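For what it's worth, that split also shows up in how the initiator is configured: the iSCSI traffic needs a VMkernel port, not a Service Console one. A minimal sketch of that setup, assuming the usual ESX 3.x service-console commands (esxcfg-vswitch, esxcfg-vmknic, esxcfg-firewall, esxcfg-swiscsi); the portgroup name, vSwitch and addressing below are placeholders, not anything from this thread:

```python
import subprocess

def run(cmd):
    # Echo and execute one service-console command.
    print("+", " ".join(cmd))
    subprocess.check_call(cmd)

# Placeholder names and addresses -- adjust for your environment.
run(["esxcfg-vswitch", "-A", "iSCSI", "vSwitch1"])        # portgroup for the VMkernel port
run(["esxcfg-vmknic", "-a", "-i", "192.168.10.11",
     "-n", "255.255.255.0", "iSCSI"])                      # VMkernel NIC that carries the I/O
run(["esxcfg-firewall", "-e", "swISCSIClient"])            # allow outbound iSCSI from the host
run(["esxcfg-swiscsi", "-e"])                              # enable the software initiator
```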

Paul_Lalonde
Commander

I don't have any direct experience with 10GigE and software iSCSI on ESX, but I do know that the VMkernel iSCSI stack is multithreaded across all available PCPUs in the host. The Service Console portion (CPU0) is only involved in the iSCSI login and authentication with the target; it doesn't handle the ongoing I/O.

I'd be curious to know the effect on physical CPUs, myself.
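One way to find out would be to capture esxtop in batch mode (esxtop -b > cpu.csv) from the service console while pushing iSCSI traffic, then compare the per-PCPU counters. A rough sketch in Python, assuming the perfmon-style CSV layout that batch mode writes; the exact counter names (something like "Physical Cpu(0)\% Processor Time") differ between versions, so treat them as assumptions:

```python
import csv

# Parse an esxtop batch-mode capture and report the average load on each
# physical CPU. Column names are assumptions: batch mode writes perfmon-style
# headers such as "\\host\Physical Cpu(0)\% Processor Time" -- adjust to match
# what your version actually emits.
with open("cpu.csv") as f:
    reader = csv.reader(f)
    header = next(reader)
    cpu_cols = [i for i, name in enumerate(header)
                if "Physical Cpu(" in name and "Processor Time" in name]
    totals = {i: 0.0 for i in cpu_cols}
    samples = 0
    for row in reader:
        if not row:
            continue
        samples += 1
        for i in cpu_cols:
            totals[i] += float(row[i])

for i in cpu_cols:
    print(f"{header[i]}: {totals[i] / samples:.1f}% average")
```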

Paul

RParker
Immortal

The CPU usage is higher than that of other VMkernel processes, but it's not really too bad. It depends on load, but it's roughly 20% overhead. I doubt you can take full advantage of 10 Gig iSCSI using the software initiator anyway. Even with an iSCSI HBA you still can't achieve those speeds (that's a ton of bandwidth); you will run out of I/O on local disk long before your iSCSI runs out of bandwidth.
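To put rough numbers behind that (the cycles-per-bit rule of thumb and the per-spindle throughput below are illustrative assumptions, not measurements):

```python
# Back-of-envelope numbers for 10 GbE software iSCSI.
# Assumptions (illustrative only): ~1 CPU cycle per bit per second for
# software TCP/iSCSI processing, and ~70 MB/s sustained per spindle.
line_rate_gbit = 10
line_rate_mb_s = line_rate_gbit * 1000 / 8              # ~1250 MB/s on the wire

cycles_per_bit = 1.0                                     # "1 GHz per 1 Gbit/s" rule of thumb
ghz_needed = line_rate_gbit * cycles_per_bit             # ~10 GHz of core speed to drive it flat out

mb_per_spindle = 70
spindles_to_fill_link = line_rate_mb_s / mb_per_spindle  # ~18 disks streaming sequentially

print(f"10 GbE line rate  : ~{line_rate_mb_s:.0f} MB/s")
print(f"CPU to saturate it: ~{ghz_needed:.0f} GHz (rule of thumb)")
print(f"Disks to fill it  : ~{spindles_to_fill_link:.0f} spindles at {mb_per_spindle} MB/s each")
```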

If you do this long term, you may want to consider a hardware iSCSI solution rather than software.

mrcreosote
Contributor

Probably me being stupid, but why do you mention I/O on local disk? Typically wouldn't you expect iSCSI to be used to connect to a centralised disk pool (i.e. SAN/NAS)? Does the SW initiator generate I/O on the disks on which ESX is installed?

Sadly there are (currently?) no 10 Gig TOE cards available (or at least none on the I/O HCL), so if we want to leverage a 10 Gig investment, the SW initiator may be the only way forward. The main concern with 1 Gig TOE cards is saturating the cards on 4-socket boxes (ESX currently supports only 2 per instance?) with a high (>40) VM count. I guess the other way forward may be to double the host count and use 2-way boxes?
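The arithmetic behind that saturation worry, using the figures from this post plus an assumed ~80% usable line rate to allow for protocol overhead:

```python
# Rough per-VM storage bandwidth with two 1 GbE ports shared by 40+ VMs.
# The 80% "usable" figure is an assumption to account for protocol overhead.
nics = 2
usable_mb_s_per_nic = 1000 / 8 * 0.8   # ~100 MB/s usable per 1 GbE port
vms = 40
per_vm_mb_s = nics * usable_mb_s_per_nic / vms
print(f"~{per_vm_mb_s:.1f} MB/s of storage bandwidth per VM on average")
```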

Thanks for your time.

RParker
Immortal

Probably me being stupid, but why do you mention I/O on local disk? Typically wouldn't you expect iSCSI to be used to connect to a centralised disk pool (i.e. SAN/NAS)? Does the SW initiator generate I/O on the disks on which ESX is installed?

No, just merely pointing out that the disk is the bottleneck, not the network... Even with a Gig network it's difficult for the disks to sustain transfer speeds high enough to fill the link; it doesn't matter how many spindles, how fast the backplane, or the type of disks. Current disk speeds cannot keep up with network speeds. Maybe in a test environment, but not once you start using the disks on a daily basis.
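As a rough illustration of the day-to-day case (the IOPS-per-spindle and block-size figures below are assumptions for the example, not measurements), a random, small-block workload generates only a tiny fraction of what a 10 GbE link can carry:

```python
# Back-of-envelope: throughput of a random-I/O workload vs. 10 GbE line rate.
# The spindle count, IOPS-per-spindle and block-size figures are illustrative.
spindles = 14                 # e.g. one shelf of 15k disks
iops_per_spindle = 180        # rough random-I/O figure for a 15k drive
block_size_kib = 8            # typical small-block VM workload
total_iops = spindles * iops_per_spindle
throughput_mb_s = total_iops * block_size_kib / 1024
line_rate_mb_s = 10 * 1000 / 8   # 10 Gbit/s, ignoring protocol overhead
print(f"~{total_iops} IOPS -> ~{throughput_mb_s:.0f} MB/s of random I/O "
      f"vs. ~{line_rate_mb_s:.0f} MB/s of 10 GbE line rate")
```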
