I am required to export a VM from a .vmx file to an OVF/OVA file for distribution to our customer. The VM is configured with two sockets and two cores per socket, for a total of four vCPUs (2x2). When the customer then re-imports from the OVA back to a VMX, the topology changes to four sockets with one core each (4x1). The guest OS is Windows 10 x86_64 Pro, which does not support four sockets.
How can I ensure that the two socket, two core setting survives the translation to and from OVF?
VMX > OVF oversimplifies the information in the vmx-file, and OVF > VMX loses more again on the way back.
If you want to preserve everything in the vmx-file, do not use this approach.
Just zip the vmx-file plus the associated vmdks + nvram - then you have full control over the result.
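For reference, a minimal sketch of that zip approach (the folder path and file patterns below are assumptions - adjust them to your VM directory; splitting or flat vmdks may use other extents, so check what Workstation actually created):

```python
import glob
import os
import zipfile

def bundle_vm(vm_dir: str, zip_path: str) -> list[str]:
    """Zip the .vmx, .vmdk and .nvram files of a VM folder verbatim,
    preserving every vmx setting the OVF round-trip would drop.
    Returns the list of bundled file names."""
    bundled = []
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for pattern in ("*.vmx", "*.vmdk", "*.nvram"):
            for path in sorted(glob.glob(os.path.join(vm_dir, pattern))):
                zf.write(path, arcname=os.path.basename(path))
                bundled.append(os.path.basename(path))
    return bundled
```

Because the files are stored verbatim, the cpuid.coresPerSocket line in the vmx survives untouched.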
Just tried your scenario myself - and apparently Workstation OVF-files do not carry separate number-of-sockets and cores-per-socket parameters; they only record the total vCPU count:
<Item>
    <rasd:AllocationUnits>hertz * 10^6</rasd:AllocationUnits>
    <rasd:Description>Number of Virtual CPUs</rasd:Description>
    <rasd:ElementName>4 virtual CPU(s)</rasd:ElementName>
    <rasd:InstanceID>1</rasd:InstanceID>
    <rasd:ResourceType>3</rasd:ResourceType>
    <rasd:VirtualQuantity>4</rasd:VirtualQuantity>
</Item>
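By comparison, vSphere-style OVF exports record the topology with a vendor extension inside that same CPU Item. Whether Workstation's importer honors it is something you would have to test, but it is the only place I know of where an OVF carries a sockets/cores split (the Envelope must also declare xmlns:vmw="http://www.vmware.com/schema/ovf"):

```xml
<Item>
    <rasd:AllocationUnits>hertz * 10^6</rasd:AllocationUnits>
    <rasd:Description>Number of Virtual CPUs</rasd:Description>
    <rasd:ElementName>4 virtual CPU(s)</rasd:ElementName>
    <rasd:InstanceID>1</rasd:InstanceID>
    <rasd:ResourceType>3</rasd:ResourceType>
    <rasd:VirtualQuantity>4</rasd:VirtualQuantity>
    <!-- vendor extension: 4 vCPUs split as 2 sockets x 2 cores -->
    <vmw:CoresPerSocket ovf:required="false">2</vmw:CoresPerSocket>
</Item>
```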
Ulli
Converting to OVF is a requirement of our customer deliverable, so shipping a plain zip is not an option. We could add some post-processing to the OVF once it is created. Is there any OVF setting that distinguishes sockets from cores per socket?
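One post-processing idea we could try: patch the exported .ovf with the vSphere-style vmw:CoresPerSocket element and refresh the SHA-256 line in the sibling .mf manifest so verification still passes. The element name and namespace below are taken from vSphere exports; whether our import path honors them is untested, and the plain string surgery assumes the element is not already present:

```python
import hashlib
import os

# hypothetical topology line: 4 vCPUs as 2 sockets x 2 cores
CORES_LINE = '<vmw:CoresPerSocket ovf:required="false">2</vmw:CoresPerSocket>'

def patch_ovf(ovf_path: str) -> None:
    """Insert vmw:CoresPerSocket into the CPU Item (ResourceType 3) and
    declare the vmw namespace on the Envelope if it is missing."""
    with open(ovf_path, encoding="utf-8") as fh:
        text = fh.read()
    if "xmlns:vmw=" not in text:
        text = text.replace(
            "<Envelope ",
            '<Envelope xmlns:vmw="http://www.vmware.com/schema/ovf" ', 1)
    cpu_marker = "<rasd:ResourceType>3</rasd:ResourceType>"
    text = text.replace(cpu_marker, cpu_marker + "\n    " + CORES_LINE, 1)
    with open(ovf_path, "w", encoding="utf-8") as fh:
        fh.write(text)

def refresh_manifest(ovf_path: str) -> None:
    """Rewrite the SHA256(<file>)= line in the .mf so tools that verify
    the manifest still accept the edited OVF."""
    mf_path = os.path.splitext(ovf_path)[0] + ".mf"
    if not os.path.exists(mf_path):
        return
    with open(ovf_path, "rb") as fh:
        digest = hashlib.sha256(fh.read()).hexdigest()
    name = os.path.basename(ovf_path)
    with open(mf_path, encoding="utf-8") as fh:
        lines = fh.readlines()
    lines = [f"SHA256({name})= {digest}\n"
             if line.startswith(f"SHA256({name})") else line
             for line in lines]
    with open(mf_path, "w", encoding="utf-8") as fh:
        fh.writelines(lines)
```

If the deliverable is an OVA rather than a loose OVF, the archive would also have to be re-tarred after patching, with the .ovf as the first member.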
