timcoote
Contributor

Virtual Switches


Hullo

Are virtual switches a necessary component of the VMware infrastructure model? I'm thinking of banning them from my enterprise architecture and I'd like to know what I lose. I know that I'll gain a simpler deployment architecture and at least have a chance of being able to identify biz application to physical hardware dependencies, but I'm sure that I'm going to lose something.

Hope someone can help me.

cheers

Tim

31 Replies
gary1012
Expert

You'll lose connectivity. You'll probably want to check this out to understand the virtual switch concept and how they're used in a VI3 environment: http://www.vmware.com/files/pdf/vmi_cisco_network_environment.pdf.

Community Supported, Community Rewarded - Please consider marking questions answered and awarding points to the correct post. It helps us all.
NTurnbull
Expert

As Gary said, vSwitches are the only way to attach a VM to a physical network, and I believe that vSwitches function at layer 2. Just as a vNIC takes a port on the vSwitch, a pNIC takes a port too, which provides the uplink.
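That port model can be sketched as a toy data structure (illustrative only; the class and method names here are invented, not any VMware API): vNICs and pNICs alike occupy ports on the vSwitch, and the pNIC port is simply the uplink to the physical network.

```python
# Toy model of the vSwitch port idea: vNICs and pNICs both take ports;
# the pNIC port acts as the uplink. Invented names, not a VMware API.

class VSwitch:
    def __init__(self, name):
        self.name = name
        self.vnic_ports = []  # ports occupied by VM virtual NICs
        self.uplinks = []     # ports occupied by physical NICs

    def connect_vnic(self, vm_name):
        self.vnic_ports.append(vm_name)

    def add_uplink(self, pnic_name):
        self.uplinks.append(pnic_name)

    def has_external_path(self):
        # A VM reaches the physical network only via at least one uplink.
        return len(self.uplinks) > 0

vs = VSwitch("vSwitch0")
vs.connect_vnic("vm01")
vs.add_uplink("vmnic0")
print(vs.has_external_path())  # True: vm01 has a path out of the host
```

A vSwitch with no uplink still switches traffic between its own vNICs, but nothing leaves the host, which is the point being made in this thread.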

Thanks,

Neil
timcoote
Contributor

Thanks. I'll read that. In the meantime, I still don't see why I'm going to lose connectivity - I can quite happily plug a server NIC in and out of a switch and it will re-establish connectivity. It won't be instant, but fast enough for most circumstances. Does this stop working with VMotion or something? Why do I need this extra layer of abstraction? It's certainly my experience that the network people are seriously out of the loop as far as business applications are concerned, and I really don't want to draw them in by having to share application dependency information or train my server teams in the nasties of switched networks. In practice, my suspicion is that this extra layer would increase my costs (extra stuff to manage and get wrong) and risks (unidentified failure modes and more challenging troubleshooting).

In an operational environment it's hard enough to find out which apps depend on which physical switches and vice versa, much worse if I'm looking for SPOFs. Inserting the virtual switch layer makes the problem much worse. I guess that either I don't need them or VMotion doesn't work outside of a Cisco environment.

This doc talks about a Cisco environment. If virtual switches are needed (for VMotion?) in a Cisco environment, what happens if I only use HP? Can I do away with them in a Cisco environment?

(maybe I'll know the answer when I've read the document :-) it's only 90 pages)

timcoote
Contributor

This just gets worse... I've now got my network team trying to demand access to my ESX servers to get SNMP access to the virtual switches that are hosted on servers. The ESX team's going to push back on this (being part of the server group, they won't have SNMP monitoring turned on by default).

Goodness knows how the network team's going to work out the host -> switch relationships as they usually don't even model that a host can have many IP addresses!

TomHowarth
Leadership

ESX, unlike the hosted products, uses vSwitches to connect the VMs to the physical network; they are an integral part of the product, not an optional extra.

If you found this or any other answer useful please consider the use of the Helpful or correct buttons to award points

Tom Howarth VCP / VCAP / vExpert
VMware Communities User Moderator
Blog: http://www.planetvm.net
Contributing author on VMware vSphere and Virtual Infrastructure Security: Securing ESX and the Virtual Environment
Contributing author on VCP VMware Certified Professional on vSphere 4 Study Guide: Exam VCP-410
gary1012
Expert

I'm still uncertain what you're trying to do. I can tell you that ESX will for certain create at least one virtual switch for the service console to reside on. From there, if you intend to provide your virtual machines connectivity to other devices outside the ESX host, you will have to create either another port group on the service console virtual switch or a new virtual switch.

As for the network turf war; I've been there too. Once the concept is explained and understood, the network folks realize that they cannot manage these virtual switches like a traditional switch. The virtual switches cannot be managed through anything other than what VMware provides. The next ESX version opens that up a bit by providing third-party vendors the ability to create their own flavor of virtual switch. Additionally, SNMP is possible, but it's going to be for the whole host and may not provide the granularity desired by your network team.

Even though the link I provided you is Cisco-centric, many of the concepts translate to other network types. You should be able to give that to your network folks and they should understand the underlying concepts.

Hope this helps...

timcoote
Contributor

Ok. I can see how they work. I still can't see why they're needed. What shortcomings pre-VI3 do they overcome? I've seen several VMs connected to a single pNIC on ESX; I don't see why putting in this extra complexity helps me. The Cisco paper that Gary pointed out shows a lot of network-specific configuration issues that I'd rather hide from the guys that are configuring the VMs, many of which are mostly abused in implementation, or compete with other 'solutions' in the technology stack.

If I must have them, I guess that I can null them out by insisting on one per pNIC, which gives me a simpler, easily identifiable topology.

Texiwill
Leadership

Hello,

First things first: if you ban vSwitches in your switching network you lose all connectivity from the VMs to anything else. You cannot direct traffic to and from VMs except through a virtual switch. This is even the case when the Cisco vSwitch is in play in the VI.next release.

So doing this would in effect give you no access to the management of your infrastructure and NO access to your VMs over the network.

This is not a good thing. I also believe you should start by reading the documents listed under the ESX/ESXi section of http://www.astroarch.com/wiki/index.php/Top_Virtualization_Security_Links since your concerns are security related.

> This just gets worse... I've now got my network team trying to demand access to my ESX servers to get SNMP access to the virtual switches that are hosted on servers. The ESX team's going to push back on this (being part of the server group, they won't have SNMP monitoring turned on by default).

You cannot SNMP-monitor the virtual switches; it is not currently possible. So that is just not an issue. The vSwitch is a layer-2 unmanaged switch with no capability to get the data you want out of it. They would have to monitor the pSwitch port to which the physical NIC is connected.

> Goodness knows how the network team's going to work out the host -> switch relationships as they usually don't even model that a host can have many IP addresses!

I would have your network team also read the documents within ESX/ESXi section of http://www.astroarch.com/wiki/index.php/Top_Virtualization_Security_Links so that they also understand how virtual networking works. You may wish to only send them the first three or so links.


Best regards,

Edward L. Haletky

VMware Communities User Moderator

====

Author of the book 'VMWare ESX Server in the Enterprise: Planning and Securing Virtualization Servers', Copyright 2008 Pearson Education.

Blue Gears and SearchVMware Pro Blogs: http://www.astroarch.com/wiki/index.php/Blog_Roll

Top Virtualization Security Links: http://www.astroarch.com/wiki/index.php/Top_Virtualization_Security_Links

--
Edward L. Haletky
vExpert XIV: 2009-2022,
VMTN Community Moderator
vSphere Upgrade Saga: https://www.astroarch.com/blogs
GitHub Repo: https://github.com/Texiwill
timcoote
Contributor

Hi Tom

Can't find a Helpful button next to your post. Looks like I'm nearly there tho': from my pov, the virtual switches are a design feature that I'm going to have to eliminate. I think that I can just ignore them and assign one switch per physical NIC. I guess that this just isn't the right forum to discuss why the design's like it is, but that's fine.

Tim

gary1012
Expert

"I've seen several VMs connected to a single pnic on ESX, I don't see why putting in this extra complexity helps me." - Yes, they're connected into a virtual switch. Bottom line - virtual switches are absolutely required.

Texiwill
Leadership

Hello,

> Ok. I can see how they work. I still can't see why they're needed.

There is no network connectivity to VMs without a vSwitch. That is HOW they work. You can create a VM with no vNIC and therefore not need network connectivity but you need a vSwitch to have connectivity.

> What shortcomings pre VI3 do they overcome?

It is not a shortcoming; it is the way VI3 works. It was the way VI2 worked as well.

>I've seen several VMs connected to a single pnic on ESX, I don't see why putting in this extra complexity helps me.

They go through a vSwitch regardless.

>The Cisco paper that Gary pointed out shows a lot of network specific configuration issues that I'd rather hide from the guys that are configuring the VMs, and many of which are mostlly abused in implementation, or compete with other 'solutions' in the technology stack.

You can easily hide the vSwitch from the VM. All they need do is take the virtual hardware you provide, connect it to the proper vSwitch, and install the OS. Yes, it is easy to abuse the situation, but you must have a vSwitch to have network connectivity. You must audit which vSwitch VMs are connected to, or just not let people have the ability to make those changes.

> If I must have them, I guess that I can null them out by insisting on one per pnic, which gives me a simpler, easily identifiable topology.

Absolutely a waste. You can have 100s of VMs within a host; this implies you would need 100s of pNICs to make it work, which is not physically possible. Consolidation ratios of 20 or 30 VMs to one host are common - do you want to manage that much cable? Can your machines handle that many pNICs?
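The cabling arithmetic behind that objection is easy to check. A quick sketch (the 30:1 ratio is the example from this thread; the host count and two-uplink assumption are invented for illustration):

```python
# Rough cabling arithmetic: binding each VM to its own pNIC does not
# scale, while shared vSwitch uplinks do. Illustrative numbers only.

hosts = 10
vms_per_host = 30              # a common 30:1 consolidation ratio
uplinks_per_vswitch = 2        # assume two pNICs per vSwitch for redundancy

pnics_one_per_vm = hosts * vms_per_host            # one pNIC (and cable) per VM
pnics_shared_uplinks = hosts * uplinks_per_vswitch # shared uplinks per host

print(pnics_one_per_vm)      # 300 pNICs/cables - physically impractical
print(pnics_shared_uplinks)  # 20 pNICs/cables for the same 300 VMs
```

No server chassis of the era takes 30 NICs, which is why the vSwitch fan-in is not optional.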

You may wish to read through my blogs on virtual networking at http://www.astroarch.com/wiki/index.php/Blog_Roll as a way to possibly understand how pNICs work within the virtualization layer. Consider each pNIC to be an uplink from a physical switch to a virtual switch. The virtual switch is an intrinsic part of virtualization and without it, you have no network connectivity.

I would also consider doing some research on the basics of virtualization, this view of the vSwitch is misplaced at best. Check out http://www.astroarch.com/wiki/index.php/Virtualization_Bookshelf for some books and references that will help you to understand VMware Virtual Infrastructure.

As an aside, the Helpful/Correct buttons are not within the posts but usually on the left hand side of the browser window under the handles used by the posters of the threads. If they are not there then the post was not created as a question.


Best regards,

Edward L. Haletky

timcoote
Contributor

Hi Texiwill

Thanks for the useful links.

I have security concerns around Integrity and availability (defining security as Confidentiality, Integrity, Availability), and maybe that link addresses these concerns. In case it doesn't, I want to reduce the complexity here - I seem to have part of the network configuration inside a host computer - this is going to drive up my costs and risk of misconfiguration hugely. I've spent the last few years measuring how good companies are at understanding how their business apps relate to their server and networking infrastructure. It's not pretty. There are very good reasons for it not being pretty, but more complexity than necessary is the last thing that I want to allow the design teams as, over time, the design assumptions will be broken in implementation and the implementation teams won't understand the design objectives.

I'll stop pontificating from a position of ignorance of the product now, and read up on what it does.

Do you know if there's a design document that justifies this design that's available to the world?

Tim

NTurnbull
Expert

A virtual NIC (vNIC) is exactly that: it has no path to connect to a physical port on the back of the ESX machine. The only way to direct the traffic coming out of these 20 vNICs and out through the two (for example) physical NICs (pNICs) is to put a 'bridging' mechanism in between the two.

Think of these vNICs as having a virtual cable plugging into this vSwitch; also plugging into the vSwitch is another virtual cable, provided by ESX, which connects to the back of the pNIC. If you're going to go down the path of having one pNIC for each VM, then you're going to find the following: you're going to have a vSwitch for each VM; you'll have no redundancy (if one pNIC fails, the VM goes down); and you'll have the same number of Ethernet cables out of the ESX host as you have VMs - you'll run out of pNICs!
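The failure case described above can be sketched in a few lines (an illustrative model, not a VMware API): with a single pNIC behind a vSwitch, that pNIC failing strands every VM on it, while a second uplink keeps them connected.

```python
# Sketch of the redundancy point: VMs lose external connectivity only
# when EVERY uplink on their vSwitch is down. Invented names/model.

def vms_stranded(uplinks_alive, vms):
    """Return the VMs cut off from the physical network.

    uplinks_alive: dict mapping pNIC name -> True if the link is up.
    """
    if any(uplinks_alive.values()):
        return []              # at least one uplink survives
    return list(vms)           # no uplink left: every VM is stranded

single = {"vmnic0": False}                   # lone pNIC has failed
teamed = {"vmnic0": False, "vmnic1": True}   # second pNIC still up

print(vms_stranded(single, ["vm01", "vm02"]))  # ['vm01', 'vm02']
print(vms_stranded(teamed, ["vm01", "vm02"]))  # []
```

This is why one vSwitch per pNIC, far from simplifying the topology, manufactures a SPOF per VM.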

Have a look at the VMware Virtual Network Concepts guide - and I don't believe you can have SNMP monitoring on a vSwitch.

Thanks,

Neil
Texiwill
Leadership

Hello,

> I have security concerns around Integrity and availability (defining security as Confidentiality, Integrity, Availability), and maybe that link addresses these concerns.

They should, but I am not sure of your exact concerns.

> In case it doesn't, I want to reduce the complexity here - I seem to have part of the network configuration inside a host computer - this is going to drive up my costs and risk of misconfiguration hugely.

Not really. You can no longer look at the 'system' as a host computer; when ESX is installed it is a hybrid compute, network, and storage appliance. That view of ESX will help you view the system better. Remember, a physical NIC is just an uplink port once ESX is installed on the host.

> I've spent the last few years measuring how good companies are at understanding how their business apps relate to their server and networking infrastructure. It's not pretty. There are very good reasons for it not being pretty, but more complexity than necessary is the last thing that I want to allow the design teams as, over time, the design assumptions will be broken in implementation and the implementation teams won't understand the design objectives.

You are correct, but trying to force onto virtualization something that cannot physically happen is also a recipe for a bad design.

> I'll stop pontificating from a position of ignorance of the product now, and read up on what it does.

That is the best approach.

> Do you know if there's a design document that justifies this design that's available to the world?

ESX, network - which specifically? I did point to quite a few blogs I have written that cover security in detail, as well as virtual networking in detail. These should help you.


Best regards,

Edward L. Haletky

timcoote
Contributor

Hello,

> It is not a shortcoming, it is the way VI3 works. It was the way VI2 worked as well.

You can see how much ignorance I'm working from here. Thanks for the lesson. :-) Is there anything that helps me to understand why ESX does not just work like the host-based VMware products ('scuse me if I get the terminology wrong)?

> You can easily hide the vSwitch from the VM. All they need do is take the virtual hardware you provide, connected to the proper vSwitch and install the OS. Yes it is easy to abuse the situation, but you must have vSwitch to have network connectivity. You must audit to which vSwitch VMs are connected or just do not let people have the ability to make those changes.

I need to be clearer - the problem I have is not what the VM can see, it's what the guy looking at the whole infrastructure can see when she's looking for a single point of failure or trying to resolve an incident on a business application which probably spans dozens of computers (physical or virtual), with communications on thousands of IP addresses. To her the VMs are just computers and have no context.

> Absolutely a waste, you can have 100s of VMs within a host. This implies you would need 100s of pNICs to make it work. Which is not physically possible. 20 or 30 to 1 compressions of VMs to hosts is common, do you want to manage that much cable? Can your machines handle that much pNIC?

I'm not being clear again. I meant to have one vswitch per pnic, which is (I thought, but could be wrong) topologically how the hosted service works and means that I can easily identify business application to infrastructure dependencies.

> You may wish to read through my blogs on virtual networking at http://www.astroarch.com/wiki/index.php/Blog_Roll as a way to possibly understand how pNICs work within the virtualization layer. Consider each pNIC to be an uplink from a physical switch to a virtual switch. The virtual switch is an intrinsic part of virtualization and without it, you have no network connectivity.

I'll do that - great learning curve this - I must say, the concept of having an uplink within my configuration frightens me: I want to treat all switches as edge switches, I don't want functionality buried in the core network.

> I would also consider doing some research on the basics of virtualization, this view of the vSwitch is misplaced at best. Check out http://www.astroarch.com/wiki/index.php/Virtualization_Bookshelf for some books and references that will help you to understand VMware Virtual Infrastructure.

Looking forward to understanding why my initial view was misplaced. I spend a lot of my time throwing out technology that adds no business value, so I need to get my facts straight. Also, one of my colleagues recently finished his PhD in virtualisation and I/O, so I'll get his pov, too.

> As an aside, the Helpful/Correct buttons are not within the posts but usually on the left hand side of the browser window under the handles used by the posters of the threads. If they are not there then the post was not created as a question.

I got one of the buttons, not the other for your post.

thanks. really helpful.

Can I assume that the docs you refer to will help me to understand the business value of the virtual switch layer over simply binding vnics to pnics? I can see that I could use the switches to provide some finer grain control of bandwidth allocation to machines by configuring the set of switches on an ESX server a particular way, but that would all break if I moved one of the VMs. Can you see the issue from my pov: 10s of thousands of physical and virtual servers across a few data centres. How do I confirm that they are all set up correctly and don't have any single points of failure? Can I even use Ciscoworks to identify which VMs are connected to which physical switches?

timcoote
Contributor

> I'm still uncertain what you're trying to do.

I'm setting a candidate architecture for the use of VI that has minimal cost and minimal risk across all of the technologies. Ideally I want to keep the network separate from the servers as I want to partition the problem and minimise the skills needed to design, implement and operate.

> Once the concept is explained and understood, the network folks realize that they cannot manage these virtual switches like any other traditional switch. The virtual switches do not have the ability to be managed through anything other than what VMware provides. The next ESX version opens that up a bit by provide third party vendors the ability to create their own flavor of virtual switch. Additionally, snmp is possible but it's going to be for the whole host and may not provide the granularity desired by your network team.

That's all fine and dandy, but why do I have these extra pieces of technology in my estate?

> Even though the link I provided you is Cisco-centric, many of the concepts translate to other network types. You should be able to give that to your network folks and they should understand the underlying concepts.

My experience is that the network is usually both over and under engineered. The last thing that I want these guys to have is more toys.

> Hope this helps...

Very much. Thanks.

Tim

Texiwill
Leadership

Hello,

> You can see how much ignorance I'm working from here. Thanks for the lesson. :-) Is there anything that helps me to understand why ESX does not just work like the host based VMware ('scuse me if I get the terminology wrong).

ESX is an OS; VMware Server runs upon another OS. ESX is its own OS that makes a hybrid network, compute, and storage device out of the system.

> I need to be clearer - the problem I have is not what the VM can see, it's what the guy looking at the whole infrastructure can see when she's looking for a single point of failure or trying to resolve an incident on a business application which probably spans dozens of computers (physical or virtual), with communications on thousands of IP addresses. To her the VMs are just computers and have no context.

VMware Roles and Permissions control this. However, you can treat each VM as its own host; that will not change. You will have a Virtualization Administrator who will aid in providing data to possibly solve this. The standard Windows admin does need to know they are virtualized, but debugging of problems within the VM is mostly OS-based. They would work with your virtualization administration team. With this many VMs, a group that just maintains the virtualization hosts is a necessity.

> I'm not being clear again. I meant to have one vswitch per pnic, which is (I thought, but could be wrong) topologically how the hosted service works and means that I can easily identify business application to infrastructure dependencies.

Yes, but you want redundancy. The topology blogs I wrote should help.

> I'll do that - great learning curve this - I must say, the concept of having an uplink within my configuration frightens me: I want to treat all switches as edge switches, I don't want functionality buried in the core network.

The vSwitch becomes your new Edge switch. Yes it is a steep learning curve.

> Looking forward to understanding why my initial view was misplaced. I spend a lot of my time throwing out technology that adds no business value, so I need to get my facts straight. Also, one of my colleagues recently finished his PhD in virtualisation and I/O, so I'll get his pov, too.

Virtualization will add business value; however, it will change how things are done, and you will possibly need more processes in place.

> I got one of the buttons, not the other for your post.

Only so many helpful and correct buttons available.

> Can I assume that the docs you refer to will help me to understand the business value of the virtual switch layer over simply binding vnics to pnics?

This is a conceptual problem. It is not possible to bind vNICs to pNICs, not even in the hosted solutions. On VMware Workstation and Server you are binding pNICs to vBridges to vNICs. On ESX you are binding pNICs to vSwitches to vNICs.

> I can see that I could use the switches to provide some finer grain control of bandwidth allocation to machines by configuring the set of switches on an ESX server a particular way, but that would all break if I moved one of the VMs.

Yes and no; it depends on your design. Moving VMs is not something you would do all that often. The virtual network can be designed to span all your ESX hosts.

> Can you see the issue from my pov: 10s of thousands of physical and virtual servers across a few data centres. How do I confirm that they are all set up correctly and don't have any single points of failure?

You would want to use a configuration control tool to make sure things do not change. Tripwire and Configuresoft have such tools for virtualization hosts.

> Can I even use Ciscoworks to identify which VMs are connected to which physical switches?

Cisco Discovery Protocol (CDP) does work within the vSwitch. I do not use CiscoWorks so cannot answer that specifically, but CDP does work, which is very useful.


Best regards,

Edward L. Haletky

Ken_Cline
Champion

> I need to be clearer - the problem I have is not what the VM can see, it's what the guy looking at the whole infrastructure can see when she's looking for a single point of failure or trying to resolve an incident on a business application which probably spans dozens of computers (physical or virtual), with communications on thousands of IP addresses. To her the VMs are just computers and have no context.

This is a valid concern, and you're right - from the application administrator's point of view, it's "just a computer". There currently are no really good management tools in the native VMware application stack to address this problem; however, there are some third party tools that are beginning to improve the visibility of the components needed to support a service - including all the virtual pieces.

> I'm not being clear again. I meant to have one vswitch per pnic, which is (I thought, but could be wrong) topologically how the hosted service works and means that I can easily identify business application to infrastructure dependencies.

Having only one vSwitch per pNIC would provide you with exactly what you're trying to avoid - a SPOF. When you create a vSwitch, you assign (hopefully) meaningful names. You could, for example, have a single vSwitch that has two pNICs associated with it. You would want each pNIC connected to a different pSwitch for fault tolerance. Let us assume that this vSwitch is to be used to support the MegaUpgrade project - you could assign the name "MegaUpgrade" to the vSwitch - this would actually simplify your administrators' job - they wouldn't have to worry about things like VLAN numbers, etc. All they need to know is that they have a MegaUpgrade system, so they need to connect it to the MegaUpgrade vSwitch.
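The naming idea above can be sketched like this (the project names, VLAN numbers, and structure are invented for illustration): the administrator supplies only the vSwitch name, and the VLAN and uplink details stay in one place.

```python
# Sketch of named vSwitches hiding network detail from VM administrators.
# All names, VLAN numbers, and the dict layout are invented examples.

vswitches = {
    "MegaUpgrade": {"vlan": 120, "uplinks": ["vmnic2", "vmnic3"]},  # two pSwitches
    "Production":  {"vlan": 10,  "uplinks": ["vmnic0", "vmnic1"]},
}

def connect(vm, vswitch_name):
    # The admin supplies only the project name; the VLAN and redundant
    # uplinks come along with it, defined once by the network design.
    cfg = vswitches[vswitch_name]
    return {"vm": vm, "vswitch": vswitch_name, "vlan": cfg["vlan"]}

print(connect("mega-app01", "MegaUpgrade"))
# {'vm': 'mega-app01', 'vswitch': 'MegaUpgrade', 'vlan': 120}
```

The point is that the VLAN and uplink choices are made once per named vSwitch, not per VM, so the people deploying VMs never touch them.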

> I'll do that - great learning curve this - I must say, the concept of having an uplink within my configuration frightens me: I want to treat all switches as edge switches, I don't want functionality buried in the core network.

This is simply a design concept that you're going to have to accept.

> Looking forward to understanding why my initial view was misplaced. I spend a lot of my time throwing out technology that adds no business value, so I need to get my facts straight. Also, one of my colleagues recently finished his PhD in virtualisation and I/O, so I'll get his pov, too.

Well, I'll have to disagree that there is no business value to a vSwitch. Without the vSwitch (or similar) technology, an ESX host would be severely limited on the number of virtual machines it could host. This would drive down the ROI that could be realized (and drive up the TCO). Also, bringing the flexibility to reconfigure portions of your network with the click of a mouse rather than the movement of a cable is a significant business driver. Many, many server outages are caused by a technician either moving the wrong cable or inadvertently plugging the right cable into the wrong port.

> Can I assume that the docs you refer to will help me to understand the business value of the virtual switch layer over simply binding vnics to pnics? I can see that I could use the switches to provide some finer grain control of bandwidth allocation to machines by configuring the set of switches on an ESX server a particular way, but that would all break if I moved one of the VMs. Can you see the issue from my pov: 10s of thousands of physical and virtual servers across a few data centres. How do I confirm that they are all set up correctly and don't have any single points of failure? Can I even use Ciscoworks to identify which VMs are connected to which physical switches?

The documentation isn't going to directly address the business value of a vSwitch. I believe most of the referenced documentation is going to deal with the technology aspects of virtualization. The business value of vSwitches is essentially the same as the business value of virtualization in general:

- Increased ROI on your server and infrastructure equipment

- Decreased TCO for your entire IT infrastructure

- Significantly enhanced DR posture

- Greatly improved business agility

- Lots more

Since the vSwitch is an integral part of the virtual infrastructure, you cannot realize these benefits without the use of vSwitches (or similar technologies). There is a learning curve - and I would encourage you to make an investment in the education of your support staff. It is an investment that will be returned many times over.

HTH,

KLC

Ken Cline

Technical Director, Virtualization

Wells Landers

TVAR Solutions, A Wells Landers Group Company

VMware Communities User Moderator

Ken Cline VMware vExpert 2009 VMware Communities User Moderator Blogging at: http://KensVirtualReality.wordpress.com/
timcoote
Contributor

Hi Ken

>This is a valid concern, and you're right - from the application administrator's point of view, it's "just a computer". There currently are no really good management tools in the native VMware application stack to address this problem; however, there are some third party tools that are beginning to improve the visibility of the components needed to support a service - including all the virtual pieces.

It's not just the admin, it's most of the IT Service Management team (incident management, config management, change management, problem management). They all have to work out the dependencies. I know about those tools - I was a founder of one of those companies ;-)

>Having only one vSwitch per pNIC would provide you with exactly what you're trying to avoid - a SPOF. When you create a vSwitch, you assign (hopefully) meaningful names. You could, for example, have a single vSwitch that has two pNICs associated with it. You would want each pNIC connected to a different pSwitch for fault tolerance. Let us assume that this vSwitch is to be used to support the MegaUpgrade project - you could assign the name "MegaUpgrade" to the vSwitch - this would actually simplify your administrators' job - they wouldn't have to worry about things like VLAN numbers, etc. All they need to know is that they have a MegaUpgrade system, so they need to connect it to the MegaUpgrade vSwitch.

Not sure where the SPOF is: if I allocate one vSwitch per pNIC, I've got a one-to-one mapping of vSwitch to physical switch. I then do whatever I normally do to ensure diverse routeing with physical servers and ensure that each VM's vNICs are connected to two vSwitches. Why would I want a different configuration approach for my virtual servers and my physical servers? How would I coordinate this switch naming across the hundreds of ESX servers in my estate? I try to avoid VLANs as they complicate the configuration, usually unnecessarily, and catch out the unwary.
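For what it's worth, the estate-wide check asked about here can be scripted once the VM -> vSwitch -> pNIC -> pSwitch mapping is exported from each host. The sketch below assumes a hypothetical inventory dump (all host, vSwitch, switch, and VM names are invented) and simply flags anything riding on a single physical switch:

```python
# Audit a (hypothetical) inventory export for single points of failure:
# any vSwitch whose uplinks all land on one physical switch makes every
# VM behind it depend on that one switch.
inventory = {
    # host -> {vswitch -> {"pswitches": [...], "vms": [...]}}
    "esx01": {
        "MegaUpgrade": {"pswitches": ["pSwitch-A", "pSwitch-B"],
                        "vms": ["web01", "web02"]},
        "Legacy":      {"pswitches": ["pSwitch-A"],
                        "vms": ["batch01"]},
    },
}

def single_points_of_failure(inv):
    """Return (host, vswitch, vm) triples whose traffic rides one pSwitch."""
    flagged = []
    for host, vswitches in inv.items():
        for vsw, info in vswitches.items():
            if len(set(info["pswitches"])) < 2:
                flagged.extend((host, vsw, vm) for vm in info["vms"])
    return flagged
```

Here the audit would flag batch01 behind esx01's Legacy vSwitch; in practice the hard part is producing a trustworthy inventory feed across hundreds of hosts, not the check itself.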

>This is simply a design concept that you're going to have to accept.

Yup. Would connecting all VMs to one vSwitch work better?

>Well, I'll have to disagree that there is no business value to a vSwitch. Without the vSwitch (or similar) technology, an ESX host would be severely limited on the number of virtual machines it could host. This would drive down the ROI that could be realized (and drive up the TCO). Also, bringing the flexibility to reconfigure portions of your network with the click of a mouse rather than the movement of a cable is a significant business driver. Many, many server outages are caused by a technician either moving the wrong cable or inadvertently plugging the right cable into the wrong port.

I'm sure that there is value. I just want to quantify it. All I'm seeing at the moment is cost and risk. I don't really follow your logic here, I'm afraid. And I really don't want a gui to drive it as it makes automatic backout of changes more than fragile.
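It's worth noting that the vSwitch layer is scriptable (via the service-console esxcfg-* commands or the VI API), so GUI-free change control with automatic backout is achievable. The sketch below is a generic apply/verify/rollback pattern with an in-memory stand-in for the real configuration calls - all of the names and the toy config are hypothetical:

```python
# Generic change-with-backout pattern: apply a scripted change, verify
# the result, and roll back automatically if verification fails.
def change_with_backout(apply_change, verify, rollback):
    snapshot = apply_change()            # returns enough state to undo
    if verify():
        return True
    rollback(snapshot)
    return False

# Toy stand-in for a real vSwitch reconfiguration:
config = {"MegaUpgrade": ["vmnic0"]}     # vSwitch -> uplinks

def add_uplink():
    snapshot = {k: list(v) for k, v in config.items()}
    config["MegaUpgrade"].append("vmnic1")   # add a second uplink
    return snapshot

def verify():
    return len(set(config["MegaUpgrade"])) == 2

def rollback(snapshot):
    config.clear()
    config.update(snapshot)

ok = change_with_backout(add_uplink, verify, rollback)
```

The same shape works whether apply_change shells out to esxcfg-vswitch or drives an API; the key is that the change, the check, and the undo are all code, not clicks.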

>The documentation isn't going to directly address the business value of a vSwitch. I believe most of the referenced documentation is going to deal with the technology aspects of virtualization. The business value of vSwitches is essentially the same as the business value of virtualization in general:

- Increased ROI on your server and infrastructure equipment

- Decreased TCO for your entire IT infrastructure

- Significantly enhanced DR posture

- Greatly improved business agility

- Lots more

I'm sure you're right. I want to see how these pieces of value come about. There are clearly extra costs around training and maybe team size. There will be a trade-off against server deployment costs and economies of scale around platform consistency, etc.

A lesson that I've learned about IT Operations is that it is full of technology that destroys value (shelfware, products deployed and not used, overlapping and even competing solutions from different technology groups). Most of this destruction comes from where technology is used in business processes that do not properly span the IT organisational silos, e.g. network team and server team, or server team and storage team, or firewall team and application team. Technology per se does not do anything. You have to get the People and Process pieces right, too. Just because I can put a complex switching fabric inside every ESX server doesn't mean that I should.

>Since the vSwitch is an integral part of the virtual infrastructure, you cannot realize these benefits without the use of vSwitches (or similar technologies). There is a learning curve - and I would encourage you to make an investment in the education of your support staff. It is an investment that will be returned many times over.

I think that your first point is a non-sequitur. That's the point that I want to bottom out. I don't see what having that extra level of configuration items in my estate gives me. I don't think that it's a common architectural necessity across all virtualisation approaches. It may well be essential, in order to scale up an individual ESX server, that I spend time tuning the internal switch configurations for some IT service patterns.

However, at scale, I'd rather avoid the variability if I can. I can allow my desktop users to futz around with their configurations, and when the world was like that, Gartner estimated the annual cost of supporting such a desktop at USD 25k. :-)

>HTH,

yes. thanks.

Tim
