StudentJT
Contributor

Networking Design... Help needed, cont.


I remade this thread to award points. Here is the original:

http://communities.vmware.com/message/1515861#1515861

7 Replies
StudentJT
Contributor

How many physical hosts do you have?

3 ESX Hosts, 1 w2k8 box running VI client

How many physical NICs do you have on each host? 2 per host. Only one is used, as I have been told.

Do you have a SAN or are you doing local storage?

I would like to set up a NAS using a Linux box running NFS, sharing one huge LUN as a datastore. The hosts, however, each have 500 GB local drives.

****************************************************

I would like to say that I really appreciate any help, forbearance, and patience. I am stuck in the middle of this project with limited access. We have a new IT admin here, the old one did not leave any documentation of our current configuration, and I don't have access to go looking around myself, so I have to work through the new admin.

They do, however, want to get this VM infrastructure into a large-scale production setup, and I can get whatever information is needed. So please, if I am omitting something crucial, let me know and I will track it down.

Thank you!

kac2
Expert

Hmmm...

What type of licensing do you have again?

So you have 3 hosts with only 2 NICs in each? You can set your environment up with 2 NICs, but I wouldn't. It's not an enterprise solution. I would think about investing in some more NICs to give you a bit of redundancy and less chance of a bottleneck. Only using 1 NIC for ESX? How in the world have you gotten this far? :)

In the meantime, you can read this blog post I wrote about setting up all the features using only 2 NICs. *CAUTION* I do not condone using only 2 NICs in production; you're going to be starving for resources.

http://kendrickcoleman.com/index.php?/Tech-Blog/creating-a-vsphere-host-w-full-capabilites-and-only-...

Do your current VMs live on local storage or on a SAN? To take advantage of features such as HA, vMotion, etc., you have to have a SAN/NAS. Performance of this SAN/NAS depends heavily on the type of drives. You will want to do some performance testing before going to production, because when I used FreeNAS it had bad performance. You can give OpenFiler a try, but again, this is a free solution and not suggested for production workloads, IMO.
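For a quick sanity check of the NAS before committing to it, a sequential dd test from a Linux client gives a rough throughput number. Everything below (server address, export path, mount point, file size) is a made-up placeholder, not something from this thread:

```shell
# Rough sequential read/write check against the NFS box, run from a
# Linux test client. Address, export path, and size are placeholders.
mount -t nfs 192.168.1.50:/exports/vmstore /mnt/nfstest

# Write 1 GiB, forcing data to disk so the number reflects the storage
# and network, not the client's page cache.
dd if=/dev/zero of=/mnt/nfstest/ddtest bs=1M count=1024 conv=fdatasync

# Read it back.
dd if=/mnt/nfstest/ddtest of=/dev/null bs=1M

rm /mnt/nfstest/ddtest
umount /mnt/nfstest
```

Keep in mind dd only measures sequential throughput; mixed VM workloads are mostly random I/O, so something like Iometer will give a more realistic picture.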

StudentJT
Contributor

Well... you're understanding my predicament.

The problem is that my company has been growing pretty rapidly over the last few years. The current VMware solution is very basic because, at the time, it was only used for rapid deployment for QA purposes, as we are a software development company. Now we need to get many other departments online, including departments in offices around the world. Hence, in the wake of our last admin leaving, I was given the task.

So, to answer your questions: we can't go the SAN route yet, though I am fighting for it, but I am going to throw a 2950 together with some 10K RPM, 3 Gb/s SAS drives to create an NFS system.

I want to use vMotion when this is set up, so we are going to get the Advanced suite and keep our existing licenses for other small systems.

I wholeheartedly agree with you about the NIC setup, so I am going to propose it to my new admin.

So I was wondering if the following theory would be sound, once I have a proper understanding of the hardware available (this is very generic):

vmnic0 → vSwitch0: Service Console + vMotion + VMkernel (IP storage)

vmnic1 → vSwitch0: VM Network

This would be the per-host setup. I am going to find out what switches I have access to, so that I could have switch redundancy. I understand that this will definitely decrease performance for vMotion, but will it cause a problem?
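From the ESX service console, a layout like the one I described might be set up roughly as follows. This is only a sketch; the port group names and IP address are placeholders, and active/standby uplink order per port group would still need to be set in the vSphere Client:

```shell
# Sketch of the proposed 2-NIC, single-vSwitch layout on an ESX host.
# Port group names and the VMkernel IP are arbitrary placeholders.

# Attach both uplinks to vSwitch0. Which NIC actually carries which
# traffic is controlled by per-port-group NIC teaming (not shown here).
esxcfg-vswitch -L vmnic0 vSwitch0
esxcfg-vswitch -L vmnic1 vSwitch0

# VMkernel port for vMotion and IP storage traffic.
esxcfg-vswitch -A "VMkernel" vSwitch0
esxcfg-vmknic -a -i 192.168.10.11 -n 255.255.255.0 "VMkernel"

# Port group for virtual machine traffic.
esxcfg-vswitch -A "VM Network" vSwitch0
```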

Thanks!

kac2
Expert

1st: vMotion and HA will not work if you have VMs stored on a server's local hard drive. This is why you HAVE to use a SAN. Centralized storage (iSCSI/NFS/FC/FCoE) lets all the servers see the VMs, which is what makes moving VMs from server to server possible. You can't have HA or vMotion if VMs are sitting on local storage.

2nd: Don't shoot yourself in the foot and try to run production on 2 NICs; you're going to end up with people yelling at you about performance. Get a detailed plan made out and follow it carefully. Get the correct hardware and infrastructure in place first, or people will not want to embrace virtualization.

Will your setup work? Yes. Will it be at optimal performance? No. You need to invest in some more NICs (6 physical NICs per ESX host is a minimum IMO; I usually go with 10). That way, you have levels of redundancy and less likelihood of a bottleneck.

If you can't do anything else and HAVE to use 2 NICs, I would honestly think about keeping everything on 1 vSwitch and tagging VLANs at the vSphere port group layer. That way you can use both NICs for all traffic. Not best practice, though.
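Tagging at the port group layer on a single vSwitch could look roughly like this from the ESX service console. The VLAN IDs and port group names below are arbitrary examples, not a recommendation:

```shell
# Sketch: one vSwitch, both uplinks, VLANs tagged per port group
# (virtual switch tagging). VLAN IDs and names are made up.
esxcfg-vswitch -L vmnic0 vSwitch0
esxcfg-vswitch -L vmnic1 vSwitch0

esxcfg-vswitch -A "Management" vSwitch0
esxcfg-vswitch -v 10 -p "Management" vSwitch0

esxcfg-vswitch -A "vMotion" vSwitch0
esxcfg-vswitch -v 20 -p "vMotion" vSwitch0

esxcfg-vswitch -A "VM Network" vSwitch0
esxcfg-vswitch -v 30 -p "VM Network" vSwitch0
```

Note that this only works if the physical switch ports the two NICs plug into are configured as trunks carrying all of those VLANs.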


StudentJT
Contributor

Thanks. I agree with your reasoning, and this is why I posted: I am trying to plan this out. In between our messages, new information came to my attention when I went to ask about getting certain hardware, so I can now rethink my strategy before planning this. You have been really helpful. Thank you.

I just want to make one thing sure, though: a SAN is required, period, for vMotion? I can't use a local storage drive on an NFS server or NAS as a datastore? As in, just use vMotion to move VMs between host resource pools (CPU, RAM) while having the actual VM disk files on the NAS?

kac2
Expert

SAN, NAS, call it whatever you want, but you have to have "centralized" storage for vMotion and HA to work.

All 3 of your servers must be able to see the storage where a VM is located for it to move between physical servers. vMotion is used to move a VM from one physical host to another. If VM1 lives on the local storage of physical hostA, how can physical hostB access that storage to move the VM? It can't; therefore you need a SAN.
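As a rough illustration, exporting a directory from the Linux box so all three hosts can mount it as a shared datastore might look like this. The subnet, paths, and datastore label are placeholders; the one real requirement is that ESX mounts NFS as root, so the export needs no_root_squash:

```shell
# On the Linux NAS: export a directory to the ESX hosts' subnet.
# Subnet and path are placeholders.
mkdir -p /exports/vmstore
echo '/exports/vmstore 192.168.1.0/24(rw,sync,no_root_squash)' >> /etc/exports
exportfs -ra

# On each ESX host: mount the export as a datastore named "nfs-vmstore".
esxcfg-nas -a -o 192.168.1.50 -s /exports/vmstore nfs-vmstore
```

Once all three hosts mount the same export, any of them can open a VM's disk files, which is exactly what vMotion and HA need.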

Here's a HORRIBLE but quick MS Paint diagram of what I mean:

StudentJT
Contributor

Cool, so I was understanding it correctly. I want the VM to physically live on the NAS while simply using VMware to allocate it resources. Thanks, your diagram is exactly what I was thinking.

Thanks, and good luck in your endeavors!
