VMware Cloud Community
Czernobog
Expert

vRA 8.1 - add multiple disks to vm via blueprint - array iteration issue with count.index

I am having an issue with adding multiple disks to a VM in a blueprint. What I want to achieve is that a user can add multiple disks of various sizes when requesting a new vSphere VM. I am almost there, however all of the disks added end up the same size. I suspect the issue lies in the iteration of the disk configuration, which is passed in as an array. Here are the relevant code snippets.

Disk configuration as input:

formatVersion: 1

inputs:
  disks:
    type: array
    title: Add disk
    description: Add new disks
    minItems: 0
    maxItems: 12
    items:
      type: object
      properties:
        size:
          type: integer
          title: Size in GB
          maxSize: 2048
          minSize: 1

Disk resource:

  Cloud_vSphere_Disk_1:
    type: Cloud.vSphere.Disk
    properties:
      count: '${length(input.disks)}'
      capacityGb: '${input.disks[count.index].size}'

VM resource:

  Cloud_vSphere_Machine_1:
    type: Cloud.vSphere.Machine
    properties:
      attachedDisks: '${map_to_object(resource.Cloud_vSphere_Disk_1[*].id, "source")}'

When a VM is deployed, the user can add an array of disk sizes. The resulting number of disks added to the VM is correct, but all of them end up the same size as the first disk. Example: the user adds 3 disks of 1 GB, 2 GB, and 3 GB; the resulting VM has the boot disk and 3 additional disks, all 1 GB in size.

The blueprint documentation sadly only covers creating a new array, not iterating over the values of an existing one. Most examples only read the first element of an array.

I assume the issue is in Cloud_vSphere_Disk_1, where the size is read incorrectly from the array via count.index.

Could anyone give me a tip on how to resolve the issue?

edit: a word

edit2: added count.index to title

26 Replies
mmonkman
Enthusiast

How about referencing the "size" element of each array entry from the disk input by index:

  disks:

    type: array
    title: Disks
    items:
      type: object
      properties:
        DiskNo:
          type: integer
          title: Disk Number
          enum:
            - 2
            - 3
            - 4
        size:
          type: number
          title: Size GB
          minimum: 5
          maximum: 15

----

  Cloud_vSphere_Disk_2:
    type: Cloud.vSphere.Disk
    properties:
      capacityGb: '${input.disks[2].size}'
  Cloud_vSphere_Disk_3:
    type: Cloud.vSphere.Disk
    properties:
      capacityGb: '${input.disks[1].size}'
  Cloud_vSphere_Disk_4:
    type: Cloud.vSphere.Disk
    properties:
      capacityGb: '${input.disks[0].size}'

---

      attachedDisks:
        - source: '${resource.Cloud_vSphere_Disk_2.id}'
        - source: '${resource.Cloud_vSphere_Disk_3.id}'
        - source: '${resource.Cloud_vSphere_Disk_4.id}'

I'm testing this at the moment and my deployed vSphere machine has the correct disk sizes.

If 3 disks are not required I'll delete them post provisioning.
Czernobog
Expert

Thanks, I guess this works; however, it is not what I am looking for. My users can add anywhere from 1 to 20 separate disks in their use cases, and pre-defining each disk is not really manageable.

The problem has its root in the array iteration, and I cannot figure it out: in capacityGb: '${input.disks[count.index].size}', count.index always evaluates to 0.

edit: tried a bit more and added a new property to the Disk resource, name: 'disk-${count.index}', to get a visual readout of the property value. Regardless of how many disks were attached, the names always started with disk-0-..., so the index value is not incremented at all.
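Reconstructed, the test resource looked roughly like this (not the exact blueprint; the name pattern is only there to surface the index value):

  Cloud_vSphere_Disk_1:
    type: Cloud.vSphere.Disk
    properties:
      count: '${length(input.disks)}'
      capacityGb: '${input.disks[count.index].size}'
      # expose count.index in the resource name; on 8.1 every instance came out as disk-0-...
      name: 'disk-${count.index}'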

j_dubs
Enthusiast

We have racked our brains over this same thing for the past couple of weeks. tl;dr: count.index doesn't work on volumes (as far as we can tell).

Not ideal, but our compromise was basically to do something like the following:

-take an input array diskGrid with all the disk properties as defined by the requestor

-provision and attach each disk in the array as a separate volume resource with count=1 if the disk should be present.  If the disk should not be present, then count=0 will skip it.

Note, we just limited the input to 5 disks max for now, but you could go further. All of this is because, as you say, count.index does NOT work on volumes - it is always 0, which is useless for our use case.

1. Allow the array on the inputs that will capture the size of each disk and other parameters. This example is basic and doesn't use the tier or anything other than the size (yet). Again, set maxItems to the number of volumes you are willing to lay out within the blueprint.

inputs:
  diskGrid:
    type: array
    description: Configuration of additional volumes to add to the VM.
    title: Disks
    items:
      type: object
      properties:
        tier:
          type: string
          title: Tier
          description: The storage tier of the volume.
          enum:
            - Gold
            - Silver
            - Bronze
        size:
          type: number
          title: Size GB
          description: The size of the volume in GB.
          minimum: 1
          maximum: 1024
        mount:
          type: string
          title: Mountpoint
          description: The mountpoint of the volume within the OS
    maxItems: 5

2. Create the Cloud.Machine resource, and use the following format for attachedDisks (keep going for however many you are attaching). After much reading of the release notes and other forum threads, this is the format you will need for attaching the disks to the machine.

  Cloud_machine:
    type: Cloud.Machine
    properties:
      <all your normal properties...>
      attachedDisks: '${map_to_object(resource.Cloud_Volume_1[*].id + resource.Cloud_Volume_2[*].id + resource.Cloud_Volume_3[*].id + resource.Cloud_Volume_4[*].id + resource.Cloud_Volume_5[*].id, "source")}'

3. Create multiple Volume resources and use the length of the input array to decide whether to provision each volume. For instance, if your input array has only 2 items, then volume 3 will not be provisioned (it will get count=0).

  Cloud_Volume_1:
    type: Cloud.Volume
    properties:
      constraints:
        <your constraints..>
      capacityGb: '${input.diskGrid[0].size}'
      name: '${input.hostname + "_disk_1"}'
      count: '${ length(input.diskGrid) >= 1 ? 1 : 0 }'
  Cloud_Volume_2:
    type: Cloud.Volume
    dependsOn:
      - Cloud_Volume_1
    properties:
      constraints:
        <your constraints>
      capacityGb: '${input.diskGrid[1].size}'
      name: '${input.hostname + "_disk_2"}'
      count: '${ length(input.diskGrid) >= 2 ? 1 : 0 }'
  Cloud_Volume_3:
    type: Cloud.Volume
    dependsOn:
      - Cloud_Volume_2
    properties:
      constraints:
        <your constraints>
      capacityGb: '${input.diskGrid[2].size}'
      name: '${input.hostname + "_disk_3"}'
      count: '${ length(input.diskGrid) >= 3 ? 1 : 0 }'

At the end of the day, this methodology is working for our testing within vSphere and Azure.

It's not elegant by any means, but for the life of us we just couldn't get a single Volume resource working with count = length of the array and then accessing the array elements via count.index to get the right sizes.

Czernobog
Expert

Thanks! I gave up on getting it to work today and opened an SR, so that support can confirm the odd behavior of count.index. I've also compared it with the Terraform configuration (which the blueprint configuration is pushed into anyway) to verify that the syntax is valid.

I also plan to do something similar with adding multiple network adapters, where each can be connected to a different network, optionally.

If the issue persists, I think another valid workaround would be to add additional components like disks or NICs via a vRO action, to which inputs are passed by fields in a custom form attached to the content (catalog item). However, I do not know whether I can freely update the VM configuration afterward; I would have to test this scenario.

mmonkman
Enthusiast

Great info being offered up here. I'm clearly still just getting a handle on vRA 8.1 and IaC in general, so I'm very interested in how this thread develops.

I did spot this in the 8.1 release notes:

vRealize Automation 8.1 Release Notes

Document limitation/workaround for cost estimation using multiple disks (if using count property in blueprint)

Currently, Day 0 provisioning of disks with the count property is broken as the blueprint UI doesn't generate the new syntax for the attached disk in YAML format. As a result, one of the mandatory properties of disk cost estimation, i.e. vcUuid, is null, which prevents cost estimation for the catalog item.

Workaround: Manually update the syntax of the blueprint YAML if using the count property for disks:

attachedDisks: '${map_by(resource.Cloud_Volume_1.id, id => {"source":id})}'
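Presumably that sits on the machine resource something like this (just a sketch to show placement - the Cloud_Volume_1 / Cloud_Machine_1 names and the placeholder image/flavor are mine, not from the release notes):

  Cloud_Volume_1:
    type: Cloud.Volume
    properties:
      capacityGb: 10
      count: 3
  Cloud_Machine_1:
    type: Cloud.Machine
    properties:
      image: <your image>
      flavor: <your flavor>
      # map_by builds the list of {"source": id} objects from the volume ids
      attachedDisks: '${map_by(resource.Cloud_Volume_1.id, id => {"source":id})}'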
Czernobog
Expert

This did not change anything in my case. I guess that if count.index does not increment in the disk resource, changing the disk assignment in the machine resource will not have any effect. Or maybe my syntax is wrong, but I have found no evidence that this is the case.

Czernobog
Expert

I had an SR open for this issue, which was acknowledged. However, according to Engineering this is not a bug; the behaviour will be addressed via a Feature Request.

mldeller
Enthusiast

Curious if you ever got a resolution to this?

mastefani
Enthusiast

Did your disks get added in the correct order? I was trying your solution, as we had the same problem with the index method. When a user adds 3 drives sized 1, 2, and 3, they appear to get added to the VM in random order rather than the order they were entered in the array.

Rahul418282
Contributor

Has anyone found a solution to this? I also have a requirement to add multiple disks during blueprint provisioning, but with count.index every disk takes the size of the first disk. All attached disks end up the same size.

 

https://docs.vmware.com/en/vRealize-Automation/8.1/rn/vRealize-Automation-81-releasenotes.html#befor...

The result is still the same after changing to the following code in the YAML file:

attachedDisks: '${map_by(resource.Cloud_Volume_1.id, id => {"source":id})}'

 

sonalkakode
Contributor

Did you get a solution to this yet? I am facing the same issue with adding multiple disks to a VM.

nhlpens87
VMware Employee

I solved the issue with ordering.  See this package on vmware code:  https://code.vmware.com/samples?id=7490 

--Jim
mldeller
Enthusiast

Good stuff. I had written a very similar workflow to call a diskpart script in-guest (Windows only) to format, letter, and label the drives. Any idea if this functionality will be included in a later release of 8.x, or should we plan to continue using these workaround methods?

nhlpens87
VMware Employee

Continue using the workarounds, for now.  Eventually, these features make their way into the product.  Custom host naming is a prime example.

--Jim
Czernobog
Expert

It seems to be working fine since 8.3; see https://docs.vmware.com/en/VMware-Cloud-Assembly/services/Using-and-Managing/GUID-8CEA3A73-5FA3-4EFB... - there's an example titled "vSphere machine with a dynamic number of disks".
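If I recall the docs example correctly, the pattern is roughly the following (a sketch from memory, not the verbatim example; the key part is allocatePerInstance: true on the disk resource, which lets count.index resolve per disk instance):

formatVersion: 1
inputs:
  disks:
    type: array
    minItems: 0
    maxItems: 6
    items:
      type: object
      properties:
        size:
          type: integer
          title: Size (GB)
resources:
  Cloud_vSphere_Disk_1:
    type: Cloud.vSphere.Disk
    # allocatePerInstance makes count.index evaluate per instance (8.3+)
    allocatePerInstance: true
    properties:
      capacityGb: '${input.disks[count.index].size}'
      count: '${length(input.disks)}'
  Cloud_vSphere_Machine_1:
    type: Cloud.vSphere.Machine
    properties:
      image: <your image>
      flavor: <your flavor>
      attachedDisks: '${map_to_object(resource.Cloud_vSphere_Disk_1[*].id, "source")}'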

I'm working on something similar for networks now, but this problem seems to be solved.

krishna20july
Contributor

Hi,

Can you share more information about the workflow you have mentioned in your comment?

krishna20july
Contributor

Can you share more info on your workflow?
