VMware Edu & Cert Community
RadarG
Enthusiast

Design using NetApp best practices

I'm preparing for my VCP5 and I'm reading the new Scott Lowe book. The book describes how traffic should be isolated: your vMotion, VMkernel, etc. But in a lot of organizations I see NetApps with a few LUNs for datastores and a few LUNs for CIFS shares. I suppose you can have your vMotion on a separate VLAN, but wouldn't it be more secure to just set up a Windows file server VM and host your files there? In the FreeNAS and Openfiler forums they stress not to run their software in VMs in a production environment. Wouldn't physical separation be better than just a VLAN? I was thinking, and correct me if I'm wrong, that hosting the CIFS shares in a VM would allow the SAN, vMotion, and VMkernel traffic to be more reliable if you had redundant switches on both sides of the VMware hosts. So if your core switches drop, your VMware environment will not drop.


Accepted Solutions
JoshuaAndrewsVM

> traffic should be isolated.

Yes, network traffic should be split up on separate networks for various reasons including performance and security.

>NetApps with a few LUNs for datastores and a few LUNs for CIFS shares.


Yes. If you have a NetApp filer, it can serve both block-level storage (FCP or iSCSI) and file-level storage (CIFS or NFS).


>I suppose you can have your vMotion on a separate VLAN, but wouldn't it be more secure to just set up a Windows file server VM and host your files there?


OK, you lost me. Yes, you should separate vMotion traffic, both to improve performance and because vMotion traffic isn't encrypted.

I don't see how you get from vMotion to a Windows file server, though.
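For reference, on a vSphere 5 host you can put vMotion on its own VLAN-tagged VMkernel port from the ESXi command line. A rough sketch (the vSwitch name, VLAN ID, and addresses here are made up for illustration; adjust for your environment):

```shell
# Create a dedicated portgroup for vMotion on an existing vSwitch
esxcli network vswitch standard portgroup add --portgroup-name=vMotion --vswitch-name=vSwitch0

# Tag the portgroup onto its own VLAN (ID 42 is arbitrary)
esxcli network vswitch standard portgroup set --portgroup-name=vMotion --vlan-id=42

# Add a VMkernel interface on that portgroup and give it a static address
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=vMotion
esxcli network ip interface ipv4 set --interface-name=vmk1 --type=static \
    --ipv4=192.168.42.11 --netmask=255.255.255.0

# Enable vMotion on the new interface (5.1+; on 5.0 use: vim-cmd hostsvc/vmotion/vnic_set vmk1)
esxcli network ip interface tag add --interface-name=vmk1 --tagname=VMotion
```

Because the traffic isn't encrypted, the point is that VLAN 42 should carry nothing but vMotion.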


However, if you are asking why you would serve CIFS from your NetApp instead of from Windows:

You don't need to patch and reboot the NetApp at least once a month.

Performance is better.

You don't need to buy a Windows license and then maintain Windows.

Snapshots. NetApp has the best snapshots in the business. When your Windows box hits high I/O, or just because it's Tuesday, and drops all of your VSS snapshots, you'll really wish you had a NetApp.


>In the FreeNAS and Openfiler forums they stress not to run their software in VMs in a production environment.

Note that there are a ton of storage appliances out there that run as VMs and serve up NFS for shared storage, including LeftHand, and they've been stable for years.


>Wouldn't physical separation be better then just a VLAN?

Yes, if you have the infrastructure. Though this is the first time you've mentioned VLANs. Are you talking about NetApps like the 2020 series with two NICs, where you need to run all traffic (management, CIFS and iSCSI) across them via VLANs?

Like this: http://sostechblog.com/2012/01/08/netapp-fas2xxx-fas3xxx-2-nic-ethernet-scheme/
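On a two-NIC filer like the 2020, that scheme looks roughly like this in the 7-mode CLI (interface names, VLAN IDs, and addresses are made up; see the link above for the full layout):

```shell
# Pair the two onboard NICs into a single-mode (failover) vif
vif create single vif0 e0a e0b

# Carve separate VLANs for management, CIFS and iSCSI out of the pair
vlan create vif0 10 20 30

# Address each VLAN interface
ifconfig vif0-10 192.168.10.5 netmask 255.255.255.0   # management
ifconfig vif0-20 192.168.20.5 netmask 255.255.255.0   # CIFS
ifconfig vif0-30 192.168.30.5 netmask 255.255.255.0   # iSCSI
```

The upstream switch ports carry all three VLANs as a trunk, so every traffic type survives the loss of either NIC or either switch.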


>I was thinking, and correct me if I'm wrong, that hosting the CIFS shares in a VM would allow the SAN, vMotion, and VMkernel to be more reliable


CIFS has nothing to do with SAN, vMotion or the VMkernel. CIFS (SMB) is the file-sharing protocol used primarily by Windows.


>if you had redundant switches on both sides of the VMware hosts, so if your core switches drop your VMware environment will not drop.


You always want redundant switches.  No single point of failure is the best practice.
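On the host side, removing the switch as a single point of failure is just a matter of teaming uplinks cabled to different physical switches. A minimal sketch (vmnic names and the vSwitch name are examples):

```shell
# Add a second uplink to the vSwitch, cabled to the second physical switch
esxcli network vswitch standard uplink add --uplink-name=vmnic1 --vswitch-name=vSwitch0

# Make both uplinks active so either switch can fail without an outage
esxcli network vswitch standard policy failover set --vswitch-name=vSwitch0 \
    --active-uplinks=vmnic0,vmnic1
```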


weinstein5
Immortal

You have packed a lot into your post. Segmenting network traffic is a best practice because you want to ensure access across the management network to your ESXi and vCenter hosts, that there is sufficient bandwidth for VMs, and that vMotion and IP-based storage are not impacted. You are also correct that redundant switches add to the reliability of the virtual environment.

vMotion traffic should be isolated for a number of reasons. Primarily you want to limit latency, as high network latency can negatively impact vMotion performance and reliability; in addition, the traffic carried across the vMotion network is not encrypted. Isolating vMotion traffic also keeps it from impacting your storage traffic, whether NAS/NFS or iSCSI.

There are features that NetApp provides that are useful in support of Windows CIFS shares, but you are correct that you do gain something by running CIFS shares in a virtualized file server: the ability to take advantage of VMware HA for availability and VMware DRS to ensure sufficient resources are being delivered to the VM.

I have also moved this to the VCP forum, as this is a question about your VCP preparation.

If you find this or any other answer useful please consider awarding points by marking the answer correct or helpful
RadarG
Enthusiast

The FAS2020s are the only NetApps that I have been able to learn from. It just doesn't seem like a great idea to have user traffic reach back to your NetApp. I thought one of the main points of a good VMware network is that your services never drop. Does NetApp have guides geared towards VMware?

JoshuaAndrewsVM

A properly configured 2020 (see my link above) has no (simple*) single point of failure. Yes, VMDK (either NFS or iSCSI) storage traffic uses the same two paired NICs that host the CIFS or NFS traffic for guest file sharing, plus management for the filer, but vMotion/FT/HA/vSphere/virtual machine guest networking is all handled by NICs on the host and has nothing to do with the NetApp.

While having storage traffic and CIFS share the same two NICs is not ideal, the 2020 is intended for small environments. The configuration above was for a 120-user network including multiple SQL servers, a file management service and Exchange. Everything was virtualized on the 2020, and all file sharing was NetApp CIFS (3+ TB).

The only reason they upgraded to the 3x20 was because they went to VDI (View 4.5) and started encountering poor performance when they hit 60 desktops. Actually, you could see performance start to peak around 40, but we juggled a little and it was still faster than their old desktops, so no one complained.

Which is simply amazing for a storage solution that cost about $40k including the 4Gb FCP tape drive (connect it straight to the 2020 and watch the backups fly; love me some NDMP).

Wait, is my NetApp flag showing? Did I mention dedupe? Or that the admin took a snapshot of the LUN holding the Exchange server and forgot about it for over a year? Only a 500 GB snapshot (see: dedupe above), which took about 0.01 seconds to remove when we noticed it. And no performance difference. Now I'm getting all teary-eyed thinking of that poor 2020 that had to be let go since it wouldn't run ONTAP 8.

*SPoF: same chassis, same OS; there are theoretically some things that could take both down, and unless you have a fancy chassis switch that supports LACP between blades you'll have only failover NICs, though that isn't a SPoF. Along the same lines, you can set CIFS to default to one NIC and storage to the other so each will have a full NIC of bandwidth.

Gav0
Hot Shot

RadarG wrote:

The FAS2020s are the only NetApps that I have been able to learn from. It just doesn't seem like a great idea to have user traffic reach back to your NetApp. I thought one of the main points of a good VMware network is that your services never drop. Does NetApp have guides geared towards VMware?

TR-3749 is an excellent document:

NetApp Storage Best Practices for VMware vSphere

Please award points to your peers for any correct or helpful answers