I have MenuMeters installed on the Mac side, and DriveGleam on the Windows side (a great little program that uses hardly any resources but puts disk activity, memory usage, and a CPU meter right there in the taskbar).
With no virtual machine booted, the Mac idles at around 3% on both cores. (It's a 2.33 GHz machine.)
When the Windows machine is booted and idling at around 0-5% on its one core, the Mac runs at around 25-30% on both cores.
When the Windows machine is IO bound at 100%, the Mac shows 50-60%, again on both cores, though often higher on one of them, sometimes approaching 80-85% (it seems to depend on what sort of process is monopolizing the CPU).
How to explain this? I speculated that (1) the virtual machine requires lots of CPU cycles from the Mac even when it's not doing anything, hence the 25-30% utilization when the Windows machine is idle, and (2) since Fusion is virtualizing the CPU, the cycles that the Windows machine gobbles up don't show up on the Mac side as used cycles - MenuMeters is measuring further down the line, so it doesn't see them. Meanwhile, emulating the hardware does require CPU time that shows up on the Mac side, and more processing is required for that emulation when the virtualized CPU is doing more work. So even when Windows is at 100%, its own CPU work isn't showing up on the Mac side, but a lot of extra cycles are being burned on the hardware emulation, including, it seems, on the virtualized core.
I still don't understand why there is "slack" showing on both cores on the Mac side when there is NO slack on the Windows side (on one of the cores), i.e. it is at 100%. The system information on the Windows side correctly reports a single-core T7600 @ 2.33 GHz, yet it seems the OS's access to the CPU is being throttled back somehow, right there in the virtualization.
Or the CPU meters I am using are not at all accurate. Or my reasoning here is way off.
On a side note, I compared the performance of my MacBook Pro (2.33 GHz) virtualizing Vista on one core to my ThinkPad X61 tablet (1.6 GHz dual core) running Vista natively. The task was to process and convert into text a 25-page scanned PDF document using ABBYY FineReader Pro (an example of the sorts of tasks I'm interested in performing on both machines) - a lengthy job that would normally require about 10 minutes. The performance of the virtual machine was impressive - it took roughly 115% of the time the X61 (slower processor, but dual core) needed to perform the same task natively. (I've since thrown away the Vista VM for a less buggy and resource-intensive OS.) Not a scientific test, but it gives me an indication of what sort of performance to expect, and I wasn't disappointed!
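To put a number on that comparison (using the "about 10 minutes" figure from above as the native baseline - a rough back-of-the-envelope, not a benchmark):

```python
# Rough arithmetic from the numbers in the post; the 10-minute baseline
# is the "about 10 minutes" figure quoted for the native X61 run.
native_min = 10.0            # X61 (1.6 GHz dual core), Vista running natively
vm_ratio = 1.15              # the VM took roughly 115% of the native time
vm_min = native_min * vm_ratio
print(round(vm_min, 1))      # → 11.5
```

So the single-core VM cost only about a minute and a half extra on a ten-minute job, despite the virtualization overhead.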
What I've seen as the largest factors driving host CPU usage are not necessarily guest CPU load directly but guest USB I/O and guest timer activity. You will find a few postings here about issues with QuickTime or iTunes background timers, and even running something like Task Manager in the guest (which has a refresh timer) loads the host CPU. A guest Task Manager may register over 95% "idle" while the host CPU is at over 20%. Also, just having a dormant USB device bridged to a host device such as a printer (while not printing) can add 10-20% host CPU.
You can definitely kill known timer offenders, disconnect USB devices, even possibly disconnect the network, and your host CPU should drop below 10%. I generally only need to keep the network connected and nothing else - no 3D acceleration either. This keeps my host CPU reasonable.
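In Fusion the same trimming can be done by editing the VM's .vmx file while the machine is powered off. These are standard VMware configuration keys, though exact names can vary between versions, so treat this as a sketch rather than a recipe:

```
# In the virtual machine's .vmx file (edit only while the VM is powered off):
usb.present = "FALSE"        # no USB controller bridged into the guest
mks.enable3d = "FALSE"       # 3D acceleration off
ethernet0.present = "TRUE"   # keep the network connected, as described above
```

The same toggles are also available through the VM settings UI; the file is just the persistent form of them.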
In certain instances you cannot control the guest OS's use of timers, such as in some Linux distributions with a high kernel timer frequency. Well, you can: recompile the kernel with a reduced timer interval, but I'm not sure what effect that has on guest timekeeping overall.
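For the Linux-guest case, the timer frequency is a build-time choice via the kernel's CONFIG_HZ options; a sketch of the relevant .config fragment for dropping from 1000 Hz to 100 Hz looks like this (newer kernels also offer the tickless CONFIG_NO_HZ option, which cuts idle timer interrupts without changing the frequency):

```
# Kernel .config fragment: reduce the periodic tick from 1000 Hz to 100 Hz
# CONFIG_HZ_1000 is not set
CONFIG_HZ_100=y
CONFIG_HZ=100
# Tickless idle (dynticks), if the kernel version supports it:
CONFIG_NO_HZ=y
```

Fewer ticks per second means fewer virtualized timer interrupts the host has to service while the guest sits idle.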