I am sure it will be absolutely fine to do this but I'd like to hear any thoughts on it.
We are making the switch to Netapp and NFS soon.
As per best practice, we have always had our IP storage traffic separated from the rest of the networking.
Is there any reason not to share iSCSI and NFS over the same dedicated network? Our iSCSI usage
will take a nose dive and become minimal. We don't really have the luxury of separating them both out.
Cheers
The best-practice argument has always been about securing the data from snooping and isolating against faults.
As much as it's going to "sound" better to isolate the two, I can't think of a realistic reason that you can't share iSCSI and NFS.
I've seen multiple sites with iSCSI and production networks on the same segment, and it was always "a disaster waiting to happen" .. for about three years.
Edit: Not that I'm advocating that.
You can do this, but bear in mind that access to network-based storage is probably the biggest bottleneck in your ESX environment; as such, having two sets of storage traffic on the same NIC could lead to performance problems.
Depending on how much storage you are accessing, this might or might not be a problem.
The best approach is probably to set it up (if you don't have the spare NICs to separate them), run some big data moves, and monitor the performance; a quick timing script like the one below will do.
Best practice will definitely be to keep them on two different NICs, though.
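A minimal sketch of such a test, using nothing but the Python standard library: it times large sequential writes and reads against a datastore path. The mount point name (nfs_datastore01) is a hypothetical placeholder, so adjust the path for your environment, and run it once against the NFS mount and once against an iSCSI-backed volume while watching NIC utilisation in esxtop or on the switch.

```python
#!/usr/bin/env python
"""Rough sequential write/read timing against a mounted datastore path."""
import os
import time

# Hypothetical datastore name -- change to match your environment.
TARGET = "/vmfs/volumes/nfs_datastore01/throughput_test.bin"
BLOCK = 1024 * 1024      # 1 MiB per write
TOTAL_MB = 2048          # ~2 GiB of test data

def timed_write(path, total_mb, block_size):
    """Write total_mb MiB sequentially and return MB/s."""
    buf = os.urandom(block_size)
    start = time.time()
    with open(path, "wb") as f:
        for _ in range(total_mb):
            f.write(buf)
        f.flush()
        os.fsync(f.fileno())  # make sure the data actually reaches the storage
    return total_mb / (time.time() - start)

def timed_read(path, block_size):
    """Read the file back sequentially and return MB/s."""
    size_mb = os.path.getsize(path) / (1024.0 * 1024.0)
    start = time.time()
    with open(path, "rb") as f:
        while f.read(block_size):
            pass
    return size_mb / (time.time() - start)

if __name__ == "__main__":
    print("write: %.1f MB/s" % timed_write(TARGET, TOTAL_MB, BLOCK))
    print("read:  %.1f MB/s" % timed_read(TARGET, BLOCK))
    os.remove(TARGET)
```

The read figure will be flattered by caching along the way, so treat the write number (which forces an fsync) as the more honest one.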
You have mentioned that you are switching from iSCSI to NFS and that your iSCSI traffic would drop.
If it is roughly the same total amount of traffic going through that NIC, I don't think there should be a problem.
But the best way to be sure is to run a few tests: move large chunks of data and watch for delays or latency spikes.
Hope this helps!!
On NetApp you will generally have a dedicated iSCSI connection and a dedicated NFS connection. On the VMware hosts I would use separate NICs for iSCSI and NFS but share the same switches; this is purely from the point of view of doubling the available bandwidth. Sharing the same switch is not an issue.
Technically speaking, you shouldn't have any issue with it. Strategically, and from a security-policy standpoint, you would be expected to separate each type of traffic in a virtualised environment: iSCSI, NFS, VMotion, Service Console, out-of-band management, DMZ, backups, and so on. As you know, iSCSI, VMotion, and NFS traffic is not encrypted, so it can be sniffed. It also depends on your workload: if you think it is light use and want to take advantage of the pipe, then go ahead, but test it first. As a best practice, though, I wouldn't mix networking and storage traffic together if you have the choice to separate them.
Regards,
Stefan Nguyen
VMware vExpert 2009
iGeek Systems Inc.
VMware, Citrix, Microsoft Consultant
Hello,
There are several things to consider. One is the amount of iSCSI traffic; another is the amount of NFS traffic. Testing the 'sharing' of the pNICs within ESX is crucial: you may find there is a bottleneck and you need the other pNICs. For redundancy and performance, if you are doing this I would have NFS go out one NIC and iSCSI out another NIC, and set up failover modes to put everything on a single pNIC as required. Note that you would use VLANs for this at the very least, though separate subnets will also work (not desired, but workable).
This way each protocol gets 1GigE of throughput except in failover cases.
The other thing to consider is the switch(es) involved and how many links there are from the edge switch to the actual storage device; you may end up needing more or fewer there. How many switches are between your hosts and the storage devices? What are their uplink/trunk speeds? You may find your bottleneck has nothing to do with the outbound pNICs attached to the ESX host.
You need to consider the entire storage switching fabric and its endpoints (ESX, SAN/NAS); a quick latency probe like the one below can help show where the delay is coming from.
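One simple way to sanity-check that path is to time TCP connections from the ESX host (or any box on the storage network) to the storage endpoints: NFS normally answers on TCP 2049 and an iSCSI target portal on TCP 3260. This is only a rough sketch, and the filer addresses in it are hypothetical placeholders; consistently high or jittery connect times point at the switching path rather than the outbound pNICs.

```python
#!/usr/bin/env python
"""Quick TCP connect-latency probe toward the storage endpoints."""
import socket
import time

# Hypothetical storage addresses -- replace with your filer's interfaces.
TARGETS = [
    ("10.0.20.50", 2049, "NFS"),     # NFS listens on TCP 2049
    ("10.0.30.50", 3260, "iSCSI"),   # iSCSI target portal on TCP 3260
]
SAMPLES = 20

def connect_time(host, port, timeout=2.0):
    """Return the time in milliseconds to complete a TCP handshake."""
    start = time.time()
    s = socket.create_connection((host, port), timeout)
    elapsed = (time.time() - start) * 1000.0
    s.close()
    return elapsed

for host, port, label in TARGETS:
    times = []
    for _ in range(SAMPLES):
        try:
            times.append(connect_time(host, port))
        except socket.error:
            pass  # count unreachable attempts as missing samples
        time.sleep(0.2)
    if times:
        print("%s %s:%d  min %.1f ms  avg %.1f ms  max %.1f ms"
              % (label, host, port, min(times),
                 sum(times) / len(times), max(times)))
    else:
        print("%s %s:%d  unreachable" % (label, host, port))
```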
Best regards,
Edward L. Haletky, VMware Communities User Moderator, VMware vExpert 2009, Virtualization Practice Analyst
Hello,
There are several things to consider. One is the amount of iSCSI traffic; another is the amount of NFS traffic. Testing the 'sharing' of the pNICs within ESX is crucial: you may find there is a bottleneck and you need the other pNICs. For redundancy and performance, if you are doing this I would have NFS go out one NIC and iSCSI out another NIC, and set up failover modes to put everything on a single pNIC as required. Note that you would use VLANs for this at the very least, though separate subnets will also work (not desired, but workable).
I would say the amount of iSCSI traffic will decrease big time. We have no intention of using iSCSI for ESX on the NetApp (we will for file serving), but it will be required for accessing some of our other SANs (CLARiiON, EqualLogic), and it may disappear from ESX altogether over time. I am definitely going to use a separate pNIC for each protocol. Just to make sure I understand you correctly: the NFS and iSCSI NICs will each act as failover for the other protocol?
Can you elaborate a bit more on why I really should be using separate VLANs? I have been talking to networking and they appear not to be keen on separating the NFS and iSCSI networking; they don't seem to see the point of it.
Maybe I can convince them with the right ammo. I have seen it recommended in many places, but the reasoning behind it rarely gets spelled out.
The other thing to consider is the switch(es) involved and how many links there are from the edge switch to the actual storage device; you may end up needing more or fewer there. How many switches are between your hosts and the storage devices? What are their uplink/trunk speeds? You may find your bottleneck has nothing to do with the outbound pNICs attached to the ESX host.
We have one Cisco switch, a 6500 series with several port blades, between hosts and storage. We will be looking at building more redundancy in there at some stage. Uplinks are 1 Gb.
From what I have been told, the NetApp (N6040) will have four NICs; it is my understanding that one port is going to be used for replication. Two others will be teamed for storage, and that team will handle both iSCSI and NFS.
Hope that makes sense; I haven't seen the NetApp yet.
cheers
If you are short on ports and need both iSCSI and NFS running, I would consider setting up a two-NIC vSwitch with two VMkernel ports on two different VLANs.
You can then set up the NICs to be fault tolerant and use the second port as failover: specifically, use NIC 1 for NFS and tell it to use NIC 2 only for failover, then have iSCSI use NIC 2 and use NIC 1 only for fault tolerance/failover.
Disk reads/writes and access to network storage can often become your bottleneck, so setting it up this way allows you to manage the traffic from within ESX (a rough sketch of that layout is below).
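To make the layout concrete, here is a rough sketch using pyVmomi (on classic ESX you would do the equivalent through the VI client or the esxcfg-* tools). Every name in it is a placeholder I've made up for illustration: the host name and credentials, vSwitch2, vmnic2/vmnic3, VLAN IDs 20 and 30, and the IP addresses; adjust all of them for your environment.

```python
#!/usr/bin/env python
"""Sketch: two-NIC vSwitch, two VLAN-tagged VMkernel portgroups, mirrored failover."""
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder host and credentials.
si = SmartConnect(host="esx01.example.com", user="root", pwd="secret",
                  sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.HostSystem], True)
host = view.view[0]
netsys = host.configManager.networkSystem

# One vSwitch with both storage uplinks attached.
vss_spec = vim.host.VirtualSwitch.Specification(
    numPorts=64,
    bridge=vim.host.VirtualSwitch.BondBridge(nicDevice=["vmnic2", "vmnic3"]))
netsys.AddVirtualSwitch(vswitchName="vSwitch2", spec=vss_spec)

def storage_portgroup(name, vlan, active, standby):
    """Portgroup with explicit failover order: one active uplink, one standby."""
    teaming = vim.host.NetworkPolicy.NicTeamingPolicy(
        policy="failover_explicit",
        nicOrder=vim.host.NetworkPolicy.NicOrderPolicy(
            activeNic=[active], standbyNic=[standby]))
    return vim.host.PortGroup.Specification(
        name=name, vlanId=vlan, vswitchName="vSwitch2",
        policy=vim.host.NetworkPolicy(nicTeaming=teaming))

# NFS active on vmnic2, iSCSI active on vmnic3, each using the other as standby.
netsys.AddPortGroup(portgrp=storage_portgroup("NFS",   20, "vmnic2", "vmnic3"))
netsys.AddPortGroup(portgrp=storage_portgroup("iSCSI", 30, "vmnic3", "vmnic2"))

# One VMkernel port per protocol, each on its own subnet.
for pg, ip in (("NFS", "10.0.20.11"), ("iSCSI", "10.0.30.11")):
    nic_spec = vim.host.VirtualNic.Specification(
        ip=vim.host.IpConfig(dhcp=False, ipAddress=ip,
                             subnetMask="255.255.255.0"))
    netsys.AddVirtualNic(portgroup=pg, nic=nic_spec)

Disconnect(si)
```

The key part is the explicit failover order: each portgroup lists one active uplink and the other as standby, so under normal conditions NFS and iSCSI each get their own 1 GbE link, and only a NIC failure puts them on the same wire.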
Thanks, that was pretty much my understanding with regard to the NIC setup and failover.
I still can't completely get my head around why to use different VLANs, as that has nothing to do with capacity or resilience from a networking point of view.
Then I thought it must have to do with not having two VMkernel ports on the same subnet, although
you can create more than one VMkernel port on one subnet. So my best guess is that with both VMkernel ports
on the same subnet, the VMkernel would not be able to differentiate between NFS and iSCSI traffic, and it would defeat the point of having two ports in the first place. Is that more or less correct?
cheers