Enthusiast

HotAddMemory DeployVM0 fails


Hi,

I successfully ran test with 8 tiles.

When I try a test with 9 tiles, it fails while trying to hot-add memory to one of the DeployVMs. It manages to do this 7-8 times but eventually fails with the following error:

2020-05-14T13:54:38.785 [pool-3-thread-6] ERROR ST121 : Error: VM DeployVM0 : Unable to HotAddMemory to use 16384 MB

2020-05-14T13:54:38.788 [pool-3-thread-6] ERROR MAIN : Exception Caught: Unable to HotAddMemoryDeployVM0 to use new Memory SettingMB: 16384 : exitSetting true

Unable to HotAddMemoryDeployVM0 to use new Memory Setting :: Error Detail (if applicable): MB: 16384

vCenter logs are rather curious:

--> Result:

--> (vim.fault.InsufficientMemoryResourcesFault) {

-->    faultCause = (vmodl.MethodFault) null,

-->    faultMessage = <unset>,

-->    unreserved = 2376810364928,

-->    requested = 1125899906318336

-->    msg = ""

--> }

2376810364928 bytes (about 2.2 TiB) is the cluster memory capacity. 1125899906318336 bytes is just short of 1 PiB.
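As a sanity check on those two numbers (my own arithmetic, not from the logs): the "requested" value is only 512 KiB short of 2^50 bytes, which looks more like an overflow artifact than a genuine memory request.

```python
# Quick sanity check on the two byte counts from the vCenter fault
# (my own arithmetic, not taken from the logs).
unreserved = 2_376_810_364_928       # bytes the cluster has unreserved
requested  = 1_125_899_906_318_336   # bytes the hot-add allegedly requested

print(f"unreserved = {unreserved / 2**40:.2f} TiB")        # ~2.16 TiB
print(f"requested  = {requested / 2**50:.10f} PiB")        # just under 1 PiB
print(f"shortfall from 2**50: {2**50 - requested} bytes")  # 524288 = 512 KiB
```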

Any ideas what could be the reason for this failure?

Kind regards,

Vladimir

8 Replies
VMware Employee

Can you post your STAX log files from the result and vmmark3.properties file?

Enthusiast

Hi James,

I've attached VMmark3.properties and STAX log files.

Regards,

Vladimir

VMware Employee

I don't think this is the issue, but the Deploy/DeployVMinfo and Deploy/Templates variables in your VMmark3.properties file only need the first two entries, since you have 4 hosts (you need one entry for every two hosts).

Can you check the destination datastore (DeployLUNs) to make sure there is sufficient space available to create the swap file?
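On the swap-file point: ESXi sizes a VM's .vswp file at roughly configured memory minus the memory reservation, so a hot-add to 16384 MB with no reservation needs about 16 GiB of swap on the destination datastore. A minimal sketch of that arithmetic (the function name is mine, not from VMmark):

```python
def required_swap_bytes(configured_mb: int, reservation_mb: int) -> int:
    """Approximate .vswp size ESXi needs: configured memory minus
    the memory reservation (hypothetical helper, for illustration only)."""
    return max(configured_mb - reservation_mb, 0) * 1024 * 1024

# A DeployVM hot-added to 16384 MB with no reservation needs ~16 GiB of swap.
print(required_swap_bytes(16384, 0) / 2**30)  # 16.0
```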

There might be a more useful error message in the vmware.log file of the failed DeployVM; this file is located in that VM's folder on the datastore it was deployed to.

Enthusiast

There's 49.96TB out of 50TB free on the datastore. DeployVM1 is the only VM on it.

Also, there's nothing related to the HotAddMemory task in the vmware.log:

2020-05-21T12:20:17.648Z| vcpu-3| I125: Guest MSR write (0x49: 0x1)

2020-05-21T12:20:24.227Z| vcpu-1| I125: HBACommon: First write on scsi0:0.fileName='/vmfs/volumes/5eb2cff9-c041ed54-25fa-246e96adcee4/DeployVM1/DeployVM1-000001.vmdk'

2020-05-21T12:20:24.227Z| vcpu-1| I125: DDB: "longContentID" = "2ac515eb7465c8ea123488d756b080fc" (was "321b1eaff13fe449ed6b2ec8a87149cd")

2020-05-21T12:20:24.266Z| vcpu-1| I125: DISKLIB-CHAIN : DiskChainUpdateContentID: old=0xa87149cd, new=0x56b080fc (2ac515eb7465c8ea123488d756b080fc)

2020-05-23T16:35:39.376Z| vmx| I125: MKSVMX: Vigor requested a screenshot

I don't think that request made it past vCenter in the first place.

Regards,

Vladimir

VMware Employee

Could you manually run the reporter scripts as explained in the user guide ("Manually Running the VMmark Reporter Scripts", p. 99)?

You can upload the results to

http://ftpsite.vmware.com

user: inbound

password: inbound

Click New Directory - F7 and choose a new directory name. Click OK.

Click Change Directory and input your new directory name. Click Change.

Click Add to select your zipped files, then click Upload.

Then post the names of the files you uploaded and their md5sums.
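For generating the md5sums before upload, `md5sum` on Linux works, or a small Python sketch using only the standard library (the script name and usage here are my own, not part of VMmark):

```python
import hashlib
import sys

def md5sum(path: str, chunk: int = 1 << 20) -> str:
    """Return the hex MD5 digest of a file, read in 1 MiB chunks."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            h.update(block)
    return h.hexdigest()

if __name__ == "__main__":
    # e.g.  python md5check.py results.zip vcsupport.tgz
    for name in sys.argv[1:]:
        print(name, md5sum(name))
```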

Alternatively, you can rerun with Reporter = true set in the properties file; however, it might crash before the reporter is run.

Enthusiast

esx108-20200524-114183.tgz 4bff441b47cfc665260570df30046e35

vcsupport-20200524-100041.tgz 9b98686933aed00ae5cb0c0271f1cff1

Regards,

Vladimir

VMware Employee

I do not see any obvious errors. One thing I notice is that you have HA enabled; maybe try disabling it. Try running again with Reporter = true and upload the entire result directory if it fails.

Enthusiast

Disabling HA on the cluster helped.

Thank you for your help, jamesz08.

Regards,

Vladimir
