I'm managing several ESX servers. These ESX servers are located on different subnets (for example: 192.168.10.x, 192.168.35.x, 192.168.64.x, etc.).
Of course, all these ESX servers can ping each other.
I want to install a VirtualCenter server to manage all these ESX servers from a common place.
Before doing so, I need to be sure that it's possible to manage ESX servers located on different subnets from a single VirtualCenter server.
Also, will I be able to use advanced features such as VMotion, HA, DRS, etc., even if the hosts are on different subnets?
Also, is there a preference regarding the subnet on which I should put my VirtualCenter server?
Many thanks.
Dan
Yep, shouldn't be a problem as long as the correct ports are allowed through (assuming there's a firewall-type device between the subnets). I currently manage hosts in multiple physically separate locations from a single VC server with no issues. As for features such as VMotion, DRS, HA, etc., these all require shared storage between the hosts and gigabit Ethernet (for VMotion/DRS), so if you're looking at using them for hosts in different locations separated by a WAN, that may be a no-go.
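For what it's worth, the VC-to-ESX management traffic of that era used a handful of well-known TCP ports (443 for HTTPS, 902 for the host agent, 903 for the remote console). A quick way to verify reachability from the VC server across your subnets is a small port probe; here's a rough Python sketch (the host addresses are illustrative, and you should double-check the exact port list against the VMware documentation for your version):

```python
import socket

# Ports commonly needed between VirtualCenter and ESX hosts:
# 443 = HTTPS/management, 902 = host agent, 903 = remote console.
# Verify the exact list against the docs for your ESX/VC version.
MGMT_PORTS = [443, 902, 903]

def check_port(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def check_host(host, ports=MGMT_PORTS):
    """Map each management port to whether it is reachable on host."""
    return {port: check_port(host, port) for port in ports}

if __name__ == "__main__":
    # Illustrative ESX host addresses, one per subnet.
    for esx in ["192.168.10.5", "192.168.35.5", "192.168.64.5"]:
        print(esx, check_host(esx))
```

Running this from the machine that will host VC should show every port as True for every host before you bother installing anything.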
Great!
Many thanks for your (correct) answer.
Yes, there is no firewall between these internal subnets; all traffic goes through without filtering, and all subnets are on the same LAN (no WAN).
Regarding our shared storage, we use Openfiler for iSCSI LUNs, and it's also possible to allow specific subnets to access the LUNs, so that may not be an issue.
Best Regards,
Dan
Hello,
Moved to VirtualCenter forum.
Best regards,
Edward L. Haletky
VMware Communities User Moderator
====
Author of the book 'VMware ESX Server in the Enterprise: Planning and Securing Virtualization Servers', Copyright 2008 Pearson Education.
SearchVMware Blog: http://itknowledgeexchange.techtarget.com/virtualization-pro/
Blue Gears Blogs - http://www.itworld.com/ and http://www.networkworld.com/community/haletky
As well as the Virtualization Wiki at http://www.astroarch.com/wiki/index.php/Virtualization
Hi, I hope I'm not breaking any rules by waking up an old post. I noticed it's on topic and didn't want to open another one.
I see (and found out) that physically separated datacenters are supported. My situation is the following:
We have six ESX servers (with VC) at one site (A) and have now added two more at another site (B) within our WAN. The two subnets are connected by a 3 Mbps line. Following that reasoning, I created a new datacenter (SiteB) and added the two servers to our existing VC at SiteA without any issue.
But after testing some VMs at SiteB, I see they stop responding for a second (it looks like latency in the VMs; clicking a button seems to freeze for a second, etc.). I checked the performance charts and found nothing that could explain this behaviour.
So I was wondering: could the traffic sent to and from the VC at SiteA be causing this behaviour? How much bandwidth do the ESX servers use?
Can I set up a second VC at SiteB and use the same license file (so we'd end up with two license servers)?
Thanks!
The bandwidth requirements between the VC server and the ESX hosts should be minimal. When you say that the VMs at SiteB don't respond for a second, do you mean that they do not respond to client requests as quickly, or that the remote console is a bit less smooth than usual? I would be surprised if simply managing an ESX host in one site with a VC in another affected the performance of that host's workloads.
At the same time, I wouldn't be surprised if something like the Remote Console didn't work as well over the WAN, as it's basically just VNC with no low-bandwidth optimizations (full resolution and color depth, no compression). The VM may appear to stop responding based on how it looks in the remote console, but I highly doubt the VM is actually frozen. Using something like Remote Desktop or a VNC server within the VMs to connect to them would probably work better for you, since you can tweak the connection settings a bit.
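If you want to put a number on the inter-site latency before blaming the console protocol, timing TCP handshakes gives a rough round-trip estimate. Here's a small Python sketch (the host address and port are placeholders; timing the handshake only approximates network RTT):

```python
import socket
import time

def tcp_rtt_ms(host, port, samples=5, timeout=3.0):
    """Estimate round-trip time in milliseconds by timing TCP handshakes."""
    times = []
    for _ in range(samples):
        start = time.perf_counter()
        try:
            with socket.create_connection((host, port), timeout=timeout):
                times.append((time.perf_counter() - start) * 1000.0)
        except OSError:
            pass  # skip samples where the host was unreachable
    return sum(times) / len(times) if times else None

if __name__ == "__main__":
    # Placeholder: an ESX host at the remote site, host-agent port 902.
    rtt = tcp_rtt_ms("192.168.64.5", 902)
    print("avg handshake RTT: %s ms" % rtt)
```

On a 3 Mbps WAN line you could easily see tens of milliseconds here, which would explain a sluggish remote console without implying anything is wrong with the VMs themselves.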
Hope that helps! Please help me out by marking my response as "helpful" or "correct" if you feel that it was useful!
-Amit