peterbrown05's Accepted Solutions

Hi Eric, feels like a long time since we last chatted! Hope you are well!! Since then I have moved teams and am now working as part of the Cloud Foundation Team... so my knowledge of the latest updates around USB and RTAV is limited. Hopefully someone else on that team can comment. That said, RTAV will redirect the device automatically if it is left local on the client side. If you want to redirect using USB, then explicitly redirect the device using USB instead (RTAV doesn't use USB) - that should work. I don't recall if it's possible to rename the device name for the RTAV virtual camera - I don't think so. As such, USB might be the best option for you. cheers peterb
Hi Dionne, I have been able to create a POC environment without 'spending any money'; however, Horizon Cloud Service requires the use of F Series cores (for the jump box). These are not available on the free trial, and in order to request them you do need to convert to the PAYG model. That said, you can convert, deploy a node, spin up some desktops and do some testing all within the 1 month, and if required avoid spending anything more than the free $200 trial - providing you then delete all objects and stop within the 1 month. I'm sure you will enjoy the service though, and so stopping shouldn't be something you want to do! ;D cheers peterb
Hi Dionne, Technically we could support the Owner role as well as the Contributor role. But as you mention, the Owner role means that the service principal has permission to grant access to anyone you wish. As a result, VMware should not be given credentials with such 'power'. As such, our code explicitly checks for (and will only allow) a service principal with Contributor-level access. Owner access is not supported. Hope this helps clarify, cheers peter
Hi Steven, Horizon Cloud Service on Microsoft Azure supports Internet-accessible desktops as part of the service price (ie it's included for free!). This provides a pair of Unified Access Gateway appliances which can be automatically deployed into the Azure environment if required. To configure it, when adding the node you need to provide a DMZ network CIDR range (/28 required), an FQDN (which is a domain name you own/have registered) and a valid certificate for this FQDN. The appliances are then deployed. From your Settings -> Capacity -> node details screen (as per screenshot below) you can read off the configured FQDN and also the Azure Load Balancer IP address. You need to set up a DNS A record mapping your FQDN to that Azure Load Balancer IP. If you are setting up just a test system and don't have a real FQDN yet, then you can add a local record to the hosts file on your client machines to map the system FQDN to the load balancer IP address (simply ping the Load Balancer FQDN to get its IP address). Once set up, end user access can be done via that FQDN (in the example above, apps.peterb.com). In the future, when new updates are available to the service, the UAG appliances will be automatically upgraded. Hope this helps, cheers peterb
Hi Steven, Good news - yes, Skype for Business IS supported on Horizon Cloud Service on Microsoft Azure. All other features you expect on RDSH from Horizon 7 are also supported (Real Time Audio Video, USB storage devices, ThinPrint, smart card, scanner redirection, client drive redirection and much more!). We support both Blast and PCoIP protocols. When you use the auto-image creation wizard in the control plane (Inventory -> Imported VMs, Import from Marketplace), under Advanced Settings this gives you the option to deselect specific features if needed. By default, all features are enabled. Here's a screenshot of the options, which can be found if you expand the Advanced Options section: hope this helps, cheers peterb
Hi Steven, looks like you were close, but likely missing one step; there are 3 parts to this:
1. creating MyVMware accounts for each of the admins (they don't need a new account if they already have one)
2. entitling the admins access to the Control Plane using their MyVMware account
3. ensuring they are a member of the Super Admins AD group
You did step 3, but it sounds like you were missing the first parts. Specifically then, all of your colleagues will need MyVMware accounts. They can register here: Registration - My VMware. Once registered, you (or another authenticated admin) can use the Getting Started page (or Settings -> General Settings) to add additional MyVMware users to the environment. From the Getting Started page, it is done here: Clicking ADD will get you to a dialog like this: Click the + button to add new rows. Note this user account must be registered in MyVMware as above. Finally, as you have already done, make sure that the user's AD account is a member of the Super Admins group that was granted access to the system. Your colleagues will then be able to log in using their MyVMware creds, and onboard into an authenticated session using their AD credentials. Hope this helps, cheers peterb
Hi Jayesh, no, you are not doing anything wrong! That data is actually collected and collated on an hourly basis, so roughly every hour you will see new data appearing; as such it does lag present time by 1 hour before session counts show up. Really the intention of these UIs / data is not to show instantaneous session counts (for that, you can go and look at the activity pages, or assignment/farm level drill-downs). Rather it is intended to show trends over time and allow you to perform good capacity management for your user population. Eg, have I provisioned too much (and it's not being used, meaning I could save money)? Or do I need more capacity in general? Or perhaps do I need more capacity on a Tuesday (based on specific activities your business needs), for example... This is an area we will continue to evolve and enhance, so we would welcome your feedback on things that would be useful to see here (remember to click More, and go and look at the longer-term views where you can hide/show specific data as needed). cheers peterb
Hi Jayesh, sure you can. The easiest way to do this is to allow HCS to manage the deletion for you; this will tidy up all objects created in your Azure subscription, as well as delete the node records in the cloud service. To do this, go to Settings -> Capacity and select the node you wish to delete, to drill into the node deletion page. From there, click Delete. It will then prompt you to type in the node name to prove that you are sure you want to perform the deletion. In the example above, type (or copy/paste) node390 into the box and click Delete. Do note that this will delete the node, Access Gateway, and any images/pools, and any active sessions will be deleted - ie make sure end users are not using the system. When you come to add back a node, you can reuse the existing Azure subscription service principal, or add a new one. From there you can select the same, or a different, region. If it's a new region, remember to make sure you have sufficient quota headroom for the various VM family sizes, and remember you will need to set up VNet/AD/DNS connectivity in that region. I know this doesn't apply to you - but if you were building out a node and realised you had made a mistake, the Getting Started page also has a button allowing the node to be deleted; this allows you to quickly delete and start over. Do note however that in all cases the resources are deleted from Microsoft Azure. Whilst this is automated, it can be slow-ish... we have seen it take 15-20 minutes (or more), depending on the load in the Azure environment at the time. If you want to create a new node reusing the same network address ranges then you would need to wait for the deletion to fully complete before you can add the new node. You can watch the deletion progress in the Azure portal. Hope that helps, cheers peterb
Hi Sophia, so sorry about the delay in replying here. So... Azure will automatically provide Azure DNS for newly created VNets. This will work great for initial deployment, but won't be good enough when you come to need to resolve AD. As such, we recommend that prior to deploying a node, you get your networking/DNS all up and running first. Once DNS is configured, you can set it as the default DNS on the VNet; any VMs connected to that VNet will then auto-inherit that/those DNS servers. Note: if you change the DNS config on a VNet after deployment, you have to restart any VMs connected to that network before they will pick up the DNS changes. As for testing if it's working, the best recommendation I have is to deploy a VM (from the Azure Marketplace - eg a Linux machine, or a lightweight Windows machine) and use that as a ping test box. Make sure to deploy the VM onto the same subnets you are interested in (in the case of the Horizon Cloud Service node, deploy the test VM onto the tenant network subnet vmw-hcs-<nodeID>-net-tenant). Once the VM is deployed, then from a console/command prompt try: ping <the fqdn of your domain> (eg ping myDomain.corp), and try: ping cloud.horizon.vmware.com - this should help you identify if things look good. If you suspect intermittent DNS (which can sometimes happen if not configured properly) then try: dig cloud.horizon.vmware.com and run that multiple times; this would show occasional failures if DNS is intermittent. Anyway, those are the best tips I've got, hope this helps, cheers peterb
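If you want to run those resolution checks repeatedly rather than by hand, here is a rough sketch in Python. The hostname myDomain.corp is a placeholder for your own domain FQDN, and the attempt count is arbitrary - this just automates the ping/dig idea above:

```python
import socket

# Hostnames to check: your AD domain (placeholder below) and the
# Horizon Cloud control plane endpoint mentioned in the answer above.
hosts_to_check = ["myDomain.corp", "cloud.horizon.vmware.com"]

def check_resolution(hostname, attempts=5):
    """Try resolving a hostname several times; intermittent DNS shows
    up as a mix of successes (IP strings) and failures (None)."""
    results = []
    for _ in range(attempts):
        try:
            results.append(socket.gethostbyname(hostname))
        except socket.gaierror:
            results.append(None)
    return results

for host in hosts_to_check:
    results = check_resolution(host)
    failures = results.count(None)
    print(f"{host}: {len(results) - failures}/{len(results)} lookups succeeded")
```

Consistent failures point at missing DNS config on the VNet; a mix of successes and failures suggests the intermittent case described above.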
So this value is connected to the farm's maintenance operation. Let's say you have a farm with 10 servers in it, and you configure scheduled maintenance to restart the servers on a Sunday at midnight. The concurrent quiescing servers value means: of those 10, how many are you willing to 'do that action' on concurrently (at the same time)? If you set a value of 2, then 2 servers will be quiesced at one time; as soon as they have no active sessions running on them, they will restart. At most 2 servers will be in the quiescing state at any one time. Remember that a quiescing server cannot accept incoming connections - so if you want to ensure that end users can connect to the system during this maintenance window, you should make sure that concurrent quiescing servers < max servers. This way you will guarantee that a server is powered on and available to accept incoming connections even whilst other servers are being quiesced and restarted. The system will iterate through all servers in a farm as quickly as possible whilst maintaining that quiescing limit. Note the other mode of maintenance is to re-pristine the VM - specifically, delete it and recreate it from the master image. This can be a good way to automate the rollout of new images if you frequently push updates which you want to get into production on a regular cadence. Hope this helps, peterb
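A minimal sketch of the batching behaviour described above, with made-up server names. The actual rollout logic is internal to the service; this only illustrates how the concurrency limit bounds how many servers are quiescing at once:

```python
def maintenance_batches(servers, concurrent_quiescing):
    """Yield successive batches of servers to quiesce and restart,
    never exceeding the configured concurrency limit."""
    for i in range(0, len(servers), concurrent_quiescing):
        yield servers[i:i + concurrent_quiescing]

servers = [f"server-{n}" for n in range(1, 11)]  # a 10-server farm
for batch in maintenance_batches(servers, 2):
    # At most 2 servers quiescing at once; the other 8 stay
    # available to accept incoming connections.
    print("quiescing:", batch)
```

With concurrent quiescing set below max servers, every batch leaves at least one server accepting connections, which is exactly the guarantee described above.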
Hi Jayesh, see the response I gave on "What's the difference between capacity and utilization?" - that will likely help a lot here. In summary though, the warning is being given because the number of available 'things' (could be VMs, or family cores, or regional cores etc) is running low, and you may not be able to create as many VMs as you would like in future. So it's a warning to go and ask Microsoft to extend the limit (via Subscription -> Limits & Quotas in your Azure portal). I don't know specifically for your environment what is tripping this. But if you go to Settings -> Capacity, then change the view to 'Type', next to the Azure subscription friendly name you will see a progress bar indicating the subscription 'usage'. Click on this, and a dialog similar to the following will be displayed; from this, you should see which of the items (in your case) are triggering the warning, and you can either delete items from your Azure subscription, or go and ask Microsoft to help extend those limits to give your environment more headroom. (Remember, you only pay for resources when they are being used. So if you ask for an additional 500 cores, that doesn't cost you anything until you use them.) I do recommend proactively asking for extensions before you really need them. In some cases I have seen extensions take a number of days; for smaller increases this can often happen very quickly (personally I have seen it happen in less than 10-15 mins), but if you are asking for thousands of cores then this will take longer. So proactive management of this is recommended. Hope this helps, cheers peterb
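To illustrate the idea, here is a rough sketch of the kind of threshold check behind that warning. The quota names, numbers, and 75% threshold are all hypothetical - not pulled from a real subscription or from the service's actual code:

```python
# Hypothetical quota snapshot -- substitute your own values from
# Subscription -> Limits & Quotas in the Azure portal.
quotas = {
    "Regional vCPUs":   {"used": 160, "limit": 200},
    "Dv2 family vCPUs": {"used": 32,  "limit": 100},
    "F family vCPUs":   {"used": 2,   "limit": 10},
    "Virtual machines": {"used": 20,  "limit": 25},
}

def capacity_warning(quotas, threshold=0.75):
    """Flag any quota item whose usage fraction crosses the threshold."""
    return {name: q["used"] / q["limit"]
            for name, q in quotas.items()
            if q["used"] / q["limit"] >= threshold}

for name, pct in capacity_warning(quotas).items():
    print(f"WARNING: {name} at {pct:.0%} -- consider requesting a limit increase")
```

Any single low item (cores, family cores, or VM count) is enough to trip the warning, which is why the per-item dialog described above is the place to look.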
Hi Jayesh, think of capacity as being how much of your current Azure quotas you have used up. If you asked for 100 cores, and you are using 68 of them (for whatever), then that will show 68% capacity used. (It's a little more complicated than this, which I will explain later.) Some (not necessarily all) of the capacity is being used to provide the Horizon Cloud Service for end users to use (eg for the underlying node/UAG appliances, along with any servers that you provision for farms etc). Utilization, on the other hand, describes how much of your Horizon Cloud Service deployed servers (as RDS farms) is actually being used by end users, and by how much. Let's say you had a farm with 1 server, with max sessions per server = 20, and you have 10 users using it; then 10/(1*20) == 50% utilized - ie your end users are using 50% of the provided capacity. If you had a farm with 10 servers in it, and 20 sessions per server, with just 10 users using it, then the utilization would be 10/(10*20) == 5% utilized. During the day, as users log in and use the service, the utilization will increase, and as an administrator you should look to make sure that it doesn't hit (or get close to) 100%. If it does get close to 100%, then assuming you still have some capacity headroom, you can make the farm max size bigger to accommodate. In most cases, if your capacity gets close to 100% then you can request an increase in cores via the Azure portal, subject to any hard limits that Microsoft imposes. Note that utilization considers 'max' servers in a farm, and isn't worried that some number of those servers may be powered off; powered-off servers still count in the utilization calculation's denominator. For capacity (I said it was a little harder than I first presented): Horizon Cloud Service uses a number of different VM sizes. We also don't have dedicated usage of your Azure subscription - you can deploy other VMs etc into the environment.
From the Azure portal (Usage and Quotas) you can see something like this (I just snapped this from one of my test subscriptions): from this, you can see they report many different things; there isn't a single 'usage %' number. What we do in Horizon Cloud Service is consider the 'worst case' for the items we need (ie we don't worry about other Azure VM families that you may have deployed etc). For our service, we know that you (currently) use the Dv2 family of VMs for RDS servers. We also use F series for the jump box, and Av2 for UAG appliances. We need family cores and regional cores for the node, and we need 'VMs'. So our calculation looks across all these values, and we return the 'worst case' % used out of them, reported as the top-level number. We have a UI that actually does a pretty good job of helping you see how this is calculated. If you click Settings -> Capacity, then view by 'Type', next to the subscription name (which incidentally you can click to update the Azure service principal - eg to rotate out your secret) there is a progress bar (yes, that's a little obscure, and we are fixing it shortly!). Anyway, click the progress bar, and a UI like this will be displayed: in my case this shows that the 'worst' item for the eastus region is cores, at 65%. So for me, my overall reported capacity is 65%. I have plenty of headroom on my Dv2 family cores (32% used) etc... The first thing that's going to run out here, and prevent me expanding my deployment, will in my case be cores. It may well be different for you. Anyway, I hope that explains in detail the meaning of capacity and utilization, and that you have a better understanding of how capacity is calculated and where to go to see it, cheers peterb
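The arithmetic above can be summarised in a few lines of Python, using the illustrative numbers from this answer (the helper names are my own, not the service's API):

```python
def utilization(active_sessions, max_servers, sessions_per_server):
    """Utilization = sessions in use / total session capacity of the farm.
    Powered-off servers still count in the denominator."""
    return active_sessions / (max_servers * sessions_per_server)

def reported_capacity(usage_fractions):
    """Horizon Cloud reports the worst-case quota item as the
    top-level capacity number."""
    return max(usage_fractions.values())

print(f"{utilization(10, 1, 20):.0%}")   # 1-server farm, 10 users -> 50%
print(f"{utilization(10, 10, 20):.0%}")  # 10-server farm, 10 users -> 5%

usage = {"regional cores": 0.65, "Dv2 family cores": 0.32, "VMs": 0.40}
print(f"reported capacity: {reported_capacity(usage):.0%}")  # worst case: 65%
```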
Hi there, yes, the jump box is a short-lived resource that orchestrates the initial build-out of the environment. It is deployed, and then it manages downloading the required binaries, creating subnets/resource groups/VMs etc, makes sure things are configured properly, and then orchestrates the pairing of the environment back to the Horizon Cloud Service control plane. Once done, and the node is connected, the jump box self-destructs and deletes itself. A (new) jump box is needed again in the future when an upgrade of the environment is triggered, or for specific maintenance/debugging operations. But in all cases it is short-lived. FWIW, the jump box uses a Standard F2 (2 vCPUs, 4 GB memory) VM. Hope this helps. Ping back if you have further questions, cheers peterb
Great to hear the interest. The team is actively working on adding VDI support for Windows 10 desktop. A small beta program is expected in early 2018 followed by general availability in 1H2018. cheers peterb
These really provide control over 'power management' (or, as might be better termed on Azure, 'cost management!'). Specifically: let's not have servers running when we don't need them. If we can power down (deallocate) servers, then we only pay the storage cost (which is significantly less than the compute cost). Min/Max determine the minimum and maximum number of servers running in the environment for the farm at any time (based on user demand). Max servers governs the maximum number of VMs of the required size that will be created; if all are powered on, this governs the maximum compute cost of the farm (network charges will vary based on use). The maximum number of users that could use a farm = MaxServers * #Sessions per server. Min servers sets the minimum number of servers that will run if no users are accessing the environment. Having >=1 server running means that a user will always be able to log in. It is possible to set min=0 (meaning you are not paying for compute for that farm), however the first user to log in in that case would experience a long logon time (as a server first needs to boot and prepare itself for use - typically 5-6 mins). It is therefore recommended to set Min >= 1, and consider your typical user demand. It does take time for a new server to power on and prepare itself for use (5 to 6 minutes), so you need to consider what Min servers should be set to in order to meet a typical logon-storm user demand. The user docs cover this in a bit more eloquent detail than I've done here! See 'Create a Farm' in the docs. As for adjusting the sessions per server after the farm has been created: you are right... today, adjusting that later is not possible. But we will be looking at making that possible in the future, cheers peterb
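A quick sketch of the sizing arithmetic above. The powered_on_servers helper is my own illustration of the demand-based scaling idea, not the service's actual algorithm, and all numbers are illustrative:

```python
def max_concurrent_users(max_servers, sessions_per_server):
    """MaxServers * sessions per server = the farm's user ceiling."""
    return max_servers * sessions_per_server

def powered_on_servers(min_servers, active_sessions,
                       sessions_per_server, max_servers):
    """Servers needed for current demand, bounded by the farm's
    min/max settings (hypothetical model of the scaling behaviour)."""
    needed = -(-active_sessions // sessions_per_server)  # ceiling division
    return min(max(needed, min_servers), max_servers)

print(max_concurrent_users(10, 20))       # farm of 10 servers, 20 sessions each -> 200 users
print(powered_on_servers(1, 0, 20, 10))   # idle farm: only min servers run -> 1
print(powered_on_servers(1, 45, 20, 10))  # 45 users need 3 servers -> 3
```

With min=1 the idle farm still pays compute for one server, but the first morning login is instant; with min=0 that first user would instead wait the 5-6 minute boot time described above.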
Hi Jayesh, I wrote a white paper on this exact subject a few weeks back. Rather than try to repeat the summary/details here, I think it's best to point you at the link: https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/products/horizon-cloud-virtual-desktops/vmware_horizon… One thing to note ref your question on GPU: in the current release for Azure we only support the 2k12 OS for GPU, and this is limited (due to a driver limitation) to a max of 20 sessions. We are working on 2k16 support, which will remove this 20-session limitation and should be available soon. I will also update the white paper with new sizing info for 2k16 when it's available. Let me know if you have more questions after you have read the white paper, cheers peterb
Great to hear that you got your node up and running! For GPU there are a few things to do here...
1. Image creation. In order to create an image that can be used in a farm for GPU, you have to install the GPU drivers in that VM. It's not possible to install the NVIDIA drivers unless the hardware (in our case the M60 card in an NV6) is present --> as such, you *must* create the 'image' based on an NV6-backed machine, ie in the Imported VMs page you need to select NV6 hardware when creating it. I suggest when you create it, you name it to make clear that it is GPU-enabled, eg RDS-GPU-2k12. Also note we only support 2k12 in the initial release of Horizon Cloud Service on Azure. We will be adding 2k16 support soon! But for now it's 2k12 only.
2. Image prep. Right now, the auto-image creation service does NOT auto-install the GPU drivers. This is something we are looking to fix, but in the meantime you need to manually install the NVIDIA drivers. 'Azure N-series driver setup for Windows | Microsoft Docs' details this pretty well, and provides the link for the NV6 drivers (it's a different driver for 2k12 vs 2k16). RDP into the desktop (I recommend creating the image with a public IP address to simplify this, if your security permissions allow), download the drivers, install them, and then trigger a reboot.
3. Convert VM to image. Now convert that desktop to an image in the normal way. This is a GPU-enabled image.
4. Create RDS farm(s). Finally, you can create farms using this image. Make sure when you do so that you again select the NV6 (GPU-backed) VM size for the farm. This will make sure that you now have a farm with GPU hardware, and an image with the GPU drivers installed.
Once you have your farm and log in, I have found that the little NVIDIA UI (find it in their application folder) allows you to see whether the GPU is being used in the session or not (I can't remember its exact name/folder offhand); alternatively, run the command-line tool nvidia-smi as per the doc link above. Our main docs also cover this in a bit more detail than I have done here. Hope this helps, cheers peterb
Hi Sophia, it is possible to use your own Azure image, but there are several steps that need performing manually to make it workable in Horizon Cloud Service. The auto-import functionality is really the simplest option, as it takes a very recent (patched) version of Windows and then applies the necessary configuration to make it work. If however you wish to use your own image, here are the high-level steps you'd follow (see the documentation for much more detail on this):
1. Start with an Azure-created VM running a supported OS (we don't currently support uploading arbitrary images from outside of Azure).
2. Place this VM in a specific resource group so that the Azure node can 'see' it.
3. Ensure that the RDS role is enabled (this requires the VM to be domain-joined) - this is needed for RDS farms (desktop- or application-based).
4. Install the Horizon 7 agent (this has to be done after the RDS role is enabled).
5. Install the DaaS agent.
6. Optionally, install the UEM agent and the NVIDIA drivers (if you are using an NV6 machine).
7. Configure the DaaS agent to communicate with the Azure node (it needs the node's IP address).
8. 'Pair' the agent with the node by downloading the bootstrap file and registering it in the VM.
9. Reboot the VM.
At this stage the VM should go 'Active' in the Imported VMs page of the administration UI, and from there it can be 'converted to an image' and then used in a farm as normal. The auto-import functionality takes care of all of the above (with the current exception of installing the UEM and NVIDIA agents, but this is something we are looking to enhance soon). The documentation does a really good job of walking through the above steps in more detail, so do go and check that out - see 'Manually Build the Master Virtual Machine in Microsoft Azure' for the HTML guide (similar section in the PDF version: https://docs.vmware.com/en/VMware-Horizon-Cloud-Service/services/horizon-cloud-14-admin.pdf). Hope this helps, peterb
Hi Sophie, no problem! It's great you are asking questions. Feel free to ask as many as you need. We are here to help :smileycool: So you are correct in that the initial release we did in October for Horizon Cloud Service on Microsoft Azure had a limitation that you could only use CN=Computers. The VMware Horizon Cloud Service on Microsoft Azure Release Notes state: "Editing the default value in the domain join's Default OU field in the Active Directory page does not persist in the system. Even though you can edit the domain join's default OU value on the Administration Console's Active Directory page, the change does not persist in the system. The default OU for the AD domain registered with the node continues to be CN=Computers. However, even though the node's default OU is CN=Computers, you can change the default OU for a farm, and assignments are created using the farm's OU. Workaround: Use the farm's OU field to set the default OU for assignments based on that farm." However, this being a cloud service, we did actually push an update yesterday :smileymischief: which improves this (but doesn't 100% fix the issue). Let me explain... Until Monday of this week, you could only override the default OU at the RDS farm creation level. With the update this Monday, when you first create the node, there is a UI which allows you to override the default OU (which will be applied to all farm creation, unless also overridden as above). This will now work! Yay! If you have already deployed a node, then you can edit this setting by going to Settings -> Active Directory and then 'Edit' the Domain Join section. You can then update the Default OU value. Note that this should be in the form OU=NestedOrgName, OU=RootOrgName, DC=DomainComponent etc. I've attached a screenshot of this UI. What remains, however, is that importing an image from the marketplace will continue to use the CN=Computers value.
This is something we are working on fixing now, and I hope to have an update for this issue very soon! Hope this helps, cheers peterb
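As a quick illustration of the required Default OU format, here is a sketch that builds a DN string from hypothetical OU and domain names (HorizonDesktops, Cloud, and corp.example.com are all made up - substitute your own):

```python
# Hypothetical values -- nested OU first, then its parent OU,
# then the AD domain broken into DC= components.
ou_path = ["HorizonDesktops", "Cloud"]
domain = "corp.example.com"

# Assemble the DN in the OU=..., OU=..., DC=... form described above.
dn = ", ".join([f"OU={ou}" for ou in ou_path] +
               [f"DC={part}" for part in domain.split(".")])
print(dn)  # OU=HorizonDesktops, OU=Cloud, DC=corp, DC=example, DC=com
```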
Hi Sophie, cool, thanks for the info. So, you are in part right: Horizon Cloud Service on Microsoft Azure does not natively support Azure Active Directory (AAD), ie it cannot directly use AAD for everything that Horizon Cloud Service needs. Specifically, when creating farms for desktops/apps we need to register machines in a domain, and AAD provides an identity only. Also, our servers and agents talk LDAP rather than the REST API that AAD requires. HOWEVER (and my white paper covers this in more detail), you can if appropriate make use of Azure Active Directory Domain Services (AAD-DS). This acts as a managed AD service and runs in Azure (Microsoft take care of operating it, including patching etc), and it syncs its identity from AAD. There are some things to take note of here though: you must have password hashing enabled in a specific way; if not, you will need all users to reset their passwords for the hashes to be regenerated for use with AAD-DS. Also, AAD-DS provides a flat hierarchy, and I do not believe it replicates any OU structure from on-premises, ie Azure then becomes like an island domain. This isn't specific to Horizon Cloud Service, and I'm by no means an expert on all the options available here, but what I would recommend you investigate is configuring like this:
1. Install AAD Connect on premises - this will replicate your user identities to AAD (without the dependency on the VPN).
2. Use AAD to provide a common cloud identity.
3. Make use of AAD-DS to replicate that identity and allow it to be used by Horizon Cloud Service on Azure.
4. Use the VPN only for end user connections back to base - for data and/or any on-premises hosted services/backends.
https://docs.microsoft.com/azure/active-directory-domain-services/active-directory-ds-overview is a really good overview of the AAD-DS feature of Azure. As mentioned though, this isn't the only way for AD to be connected into the system.
Hosting it locally, or connecting to on-prem via VPN, are viable options too. I will share the white paper link when it is published later this week. Hope this helps, cheers peterb