VMware Cloud Community
rmcs-chad
Contributor

Newbie ESXi iSCSI Configuration Help

I just jumped into ESXi virtualization by purchasing a Dell R805 server with a StoreVault (NetApp) S300 iSCSI array. The R805 has 8GB of physical memory (I'm going to bump up to 24GB soon), 2x Quad-Core Opteron processors, and 160GB of physical (internal) storage. The iSCSI array has about 1TB of storage. The S300 is a low-end iSCSI array that has 4 NIC interfaces running at 1Gb. Unfortunately, you can't assign different NICs to different LUNs you create; but you CAN configure a NIC team using 2, 3 or all 4 of the NICs.

My end goal is to create 2 VMs: one running Small Business Server 2008 and the other running Terminal Server (using Server 2008). I have successfully created a working SBS2008 VM, but wanted to run my configuration past all you gurus out there to see if there are any "tweaks" that could help optimize performance or if I've overlooked something.

I created a 400GB LUN on the iSCSI array and formatted it as VMFS using the Infrastructure client. I then created a new VM which contains 3 virtual disks: 100GB (C:), 50GB (D:), 150GB (E:). These disks would create a C: SBS2008 system partition, D: Exchange partition, and E: Data partition. I put the C: drive on SCSI ID (0:0), the D: on (1:0) and the E: on (2:0) and created them using an LSI Logic controller. I allocated 6GB of memory and 2 processors to the VM. SBS2008 is currently up-and-running.
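For anyone wanting to script this, the same three virtual disks could also be created with vmkfstools from the console (or the vicfg-* Remote CLI tools for ESXi). This is just a sketch — the datastore and folder names below are made up, so substitute your own:

```shell
# Create the three virtual disks for the SBS2008 VM
# (SCSI 0:0 = C:, 1:0 = D:, 2:0 = E:).
# "iscsi-lun1" and "SBS2008" are placeholder names -- use your real
# datastore label and VM folder as shown under /vmfs/volumes.
vmkfstools -c 100G /vmfs/volumes/iscsi-lun1/SBS2008/SBS2008_C.vmdk
vmkfstools -c 50G  /vmfs/volumes/iscsi-lun1/SBS2008/SBS2008_D.vmdk
vmkfstools -c 150G /vmfs/volumes/iscsi-lun1/SBS2008/SBS2008_E.vmdk
```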

As for networking, I have a low-end Dell Gigabit switch that supports jumbo frames and VLANs. (I enabled jumbo frames, but read somewhere that ESXi doesn't support jumbo frames. Can anyone confirm/deny that statement?) I created 2 VLANs: one for normal network traffic between server and workstations, and the other for iSCSI traffic. Inside the VLAN for iSCSI traffic I have the iSCSI array with 2 NICs configured as a team. The Dell server has 4 available Gigabit NICs; one is on the "production" VLAN while another is on the iSCSI VLAN. I also have a Windows XP computer connected to the iSCSI VLAN so that I can manage both ESXi and the iSCSI array from the same machine.

Inside ESXi I have 2 vSwitches configured. vSwitch0 is configured for the VMs and is on the server/workstation subnet; vSwitch1 is configured for iSCSI traffic and is on the iSCSI subnet. I plan on adding an additional LUN to the iSCSI array for the Terminal Server and putting the associated VM on vSwitch0.
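In case it helps anyone replicate this, here's roughly what that vSwitch1 setup looks like with the esxcfg-* commands (unsupported console, or the vicfg-* equivalents in the Remote CLI). The vmnic number and IP addresses are assumptions — adjust for your environment:

```shell
# Create the iSCSI vSwitch and uplink it to a physical NIC
# (vmnic1 is an assumption -- check "esxcfg-nics -l" for your layout)
esxcfg-vswitch -a vSwitch1
esxcfg-vswitch -L vmnic1 vSwitch1

# Add a port group plus a VMkernel interface for the software
# iSCSI initiator (example IP on the iSCSI subnet)
esxcfg-vswitch -A "iSCSI" vSwitch1
esxcfg-vmknic -a -i 192.168.10.10 -n 255.255.255.0 "iSCSI"

# Verify the config and test reachability of the array (example IP)
esxcfg-vswitch -l
vmkping 192.168.10.20
```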

Anyone see any issues with this setup? Would I gain any performance by putting all 4 NICs in the iSCSI array into the team? What about using another of the Dell server's NICs in the iSCSI vSwitch? What about putting another server NIC in the production (server/workstation) vSwitch?

Thanks in advance for any assistance!

1 Reply
Craig_Baltzer
Expert

Hi and welcome to the forums! WRT jumbo frame support with iSCSI, the current software iSCSI initiator in ESX does not support jumbo frames; if you want to use jumbo frames you need to use an iSCSI HBA such as the QLogic QLA-4050 cards. The "best practice" is to keep the iSCSI traffic separate from your production traffic, so your VLAN configuration is a good way to do that.

Putting all 4 NICs on the storage into a team likely won't get you much of anything performance-wise if there is only a single path "in" to the ESXi box. http://communities.vmware.com/thread/153175 has a bit of a discussion on the subject and what you'd need on the iSCSI array side to support it (from the sounds of things you may be in the same boat with the S300 as the poster was with the MSA2012i). The iSCSI configuration guide () is another good source of guidance if you haven't been there already.

One thing that might be good to look at is getting a 2nd switch so that you can have a bit of network redundancy (i.e. set up multiple NICs on each of your vSwitches and split up the NICs between the physical switches).
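Once you have spare NICs, adding a second uplink to each vSwitch is a one-liner per switch. A rough sketch (the vmnic numbers are placeholders for whatever is free on your box):

```shell
# List physical NICs to find the unused ones first
esxcfg-nics -l

# Add a second uplink to each vSwitch for redundancy; ideally each
# uplink in a pair goes to a different physical switch
esxcfg-vswitch -L vmnic2 vSwitch0
esxcfg-vswitch -L vmnic3 vSwitch1

# Confirm both uplinks show against each vSwitch
esxcfg-vswitch -l
```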