Okay, I've set up my iSCSI target on a Sun X4500: one target with two 2TB LUNs. I can see this target and its LUNs fine from either of my VMs or my Vista workstation. I'd like to set up iSCSI through VMware, however, and I'm having trouble with it.

My physical servers are Sun X6220s in a 6000 chassis. Each has two NICs available to it. My plan is to use one NIC for general traffic and the other for iSCSI traffic. The default network configuration for my server uses only the first NIC and has the management port and my two VMs on it. I've since created a new virtual switch using my other NIC and put a VMkernel port on it, using the same subnet as my X4500 storage server. I've enabled my iSCSI adapter and added the X4500 as a server. However, after multiple rescans and a couple of reboots, I still can't get it to see my target.

I'm not really sure what my options are at this point. I'd like to check whether my server even has basic connectivity to my X4500, but I'm not sure how. It should, by all rights, but with my current symptoms I'd like to be sure. I've seen some online guides mention a console that I can enter commands into, but I don't know how to get to it. Any ideas?
Hi,
Update 5 is not going to work; you need Update 6, which has the fixes for using NAA.
As far as setting up the thumper have a look at http://blog.laspina.ca/roller/Ubiquitous/entry/running_zfs_over_iscsi_as
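For anyone following along, the basic zvol-over-iSCSI setup on Solaris 10 looks roughly like this; the pool and volume names below are just placeholders, so adjust them to your own layout:

```shell
# Create a 1 TB zvol (staying under the recommended LUN size) --
# "Pool2/P2V1" is a placeholder name
zfs create -V 1t Pool2/P2V1

# Export it as an iSCSI target; Solaris 10's built-in target
# daemon picks the zvol up automatically
zfs set shareiscsi=on Pool2/P2V1

# Confirm the target and LUN were created
iscsitadm list target -v
```

The `shareiscsi` property is what ties ZFS to the iSCSI target daemon, so no separate target configuration file is needed for a basic setup.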
Hello,
Moved to ESXi forum.
Best regards,
Edward L. Haletky
VMware Communities User Moderator
====
Author of the book 'VMware ESX Server in the Enterprise: Planning and Securing Virtualization Servers', Copyright 2008 Pearson Education.
SearchVMware Blog: http://itknowledgeexchange.techtarget.com/virtualization-pro/
Blue Gears Blogs - http://www.itworld.com/ and http://www.networkworld.com/community/haletky
As well as the Virtualization Wiki at http://www.astroarch.com/wiki/index.php/Virtualization
Arrrgh... I'm dealing with much the same thing here...
How is your second NIC connected to your iSCSI server? Separate switch? Same switch and VLAN?
Jackobli: It's on the same switch, currently with no VLANs set up. This is a lab environment with a switch dedicated to my current project. I plan on putting storage on a separate VLAN once I get these little kinks worked out...
Mike: I'm serving it with Solaris 10 update 5 using zfs.
Could it be the vmkernel wanting to connect as root? Did you set anon=0 when you created the ZFS mountpoints? Saw a good blog here
Thanks,
Neil
IMHO the OP talks about iSCSI. So there shouldn't be the same problems as NFS. But there are other points to look for:
Authentication for iSCSI (CHAP) set?
Any routing problem? He wrote about "the same subnet as the Storage server"...
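One quick sanity check on the connectivity question: the ESX console has vmkping, which sends pings from the VMkernel interface rather than the service console, so it tests the exact path the iSCSI initiator will use. The address below is a placeholder for the storage server's iSCSI-facing NIC:

```shell
# List the VMkernel NICs and their IPs to confirm the iSCSI
# vmknic exists on the subnet you expect
esxcfg-vmknic -l

# Ping the storage server *from the VMkernel* -- 192.168.2.10
# is a placeholder for the X4500's address
vmkping 192.168.2.10

# For comparison: a plain ping only tests the service console's
# path, which can succeed even when the VMkernel path is broken
ping 192.168.2.10
```

If vmkping fails while plain ping works, the problem is in the VMkernel networking (vSwitch, vmknic IP, or gateway), not on the storage side.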
Neil: I've looked through the blog, and I'm not sure those instructions apply in my scenario. I'm not very fluent with Solaris and ZFS administration, I'm still learning the ropes, so I could be wrong here. What I see, though, are instructions for sharing via NFS, whereas I'm using iSCSI. I don't believe the sharenfs property applies when using iSCSI, and I don't believe the shareiscsi property can be set to anything other than 'on' or 'off.' Let me know if I'm wrong on any of this and I'll get right in there and set what I need to set.
Jack: I've left CHAP alone, so it shouldn't be enabled. I've been able to connect with other initiators and see the target okay, so I don't think authentication is my issue.
As for the network issue, the storage server, the VM, and my workstation are all on the same switch, with no VLANs, and the switch is stand-alone at the moment. I've included screenshots of my VM network configuration, my storage server's two NICs and their IPs, and my VMware host's initiator discovery screen.

I believe iSCSI uses a VMkernel port, and it's best practice to use a different NIC for storage as well as have it on a different subnet, so I'm pretty sure I have that set up properly on the host in the third screenshot. The second screenshot is from my X4500 with Solaris; you can see that one of my two enabled NICs is on the .2 subnet, the same subnet as the VMkernel port I added to handle iSCSI traffic on my host. The first screenshot is from my host, showing that I'm looking for targets on the .2 side of my storage server, and that I've added the storage server's second NIC as my discovery server. Once I get this working I may play with port aggregation and the like, but for now I'll be happy to get one NIC working with storage on each end.
I appreciate the help, guys; I'm new to just about everything I'm dealing with here: iSCSI, Solaris, ZFS, VMware, you name it. I feel good about what I've managed to learn so far, but now I know I'm in over my head with this issue. Thanks again for the responses.
Hi Trav_R, oops, sorry; I did find this article after re-reading your problem instead of skim-reading it! It does say he's using ESX 3.0.2, though.
Thanks,
Neil
Mike: Thanks for the info, I'm in the process of upgrading to update 6, will let everyone know how it turns out.
Hi,
I also faced the same problem. I found the solution: when you create the VMkernel port, specify the gateway address as well. Hope this helps with your problem.
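To add to that: on ESX 3.x the VMkernel default gateway can also be checked and set from the command line with esxcfg-route. The address below is a placeholder for whatever gateway sits on your storage subnet:

```shell
# Show the current VMkernel routing table / default gateway
esxcfg-route -l

# Set the VMkernel default gateway -- 192.168.2.1 is a placeholder
esxcfg-route 192.168.2.1
```

Even on a flat subnet, a missing or wrong VMkernel gateway has been known to break iSCSI discovery, so it's worth a look.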
Mike: That got me going, thanks man. It got me past my initial problem of not being able to see any targets at all, and moved me right on into another problem: I can only see the first LUN. That's a separate issue, though, so I've given you credit for the answer and I'm going to start another thread if I can't find an answer. Thanks again, and thanks to everyone else for the support.
iscsitadm list target -v
Target: nvs1
    iSCSI Name: iqn.1986-03.com.sun:02:acdaa7e2-e857-6184-a288-d93064d2f440.nvs1
    Connections: 0
    ACL list:
    TPGT list:
        TPGT: 1
    LUN information:
        LUN: 0
            GUID: 010000144fa6bdc800002a0049258fe7
            VID: SUN
            PID: SOLARIS
            Type: disk
            Size: 2.0T
            Backing store: /dev/zvol/dsk/Pool2/P2V1
            Status: online
        LUN: 1
            GUID: 010000144fa6bdc800002a00492590e3
            VID: SUN
            PID: SOLARIS
            Type: disk
            Size: 2.0T
            Backing store: /dev/zvol/dsk/Pool2/P2V2
            Status: online
        LUN: 2
            GUID: 010000144fa6bdc800002a00492590e4
            VID: SUN
            PID: SOLARIS
            Type: disk
            Size: 2.0T
            Backing store: /dev/zvol/dsk/Pool2/P2V3
            Status: online
        LUN: 3
            GUID: 010000144fa6bdc800002a00492590e5
            VID: SUN
            PID: SOLARIS
            Type: disk
            Size: 2.0T
            Backing store: /dev/zvol/dsk/Pool2/P2V4
            Status: online
        LUN: 4
            GUID: 010000144fa6bdc800002a00492590e6
            VID: SUN
            PID: SOLARIS
            Type: disk
            Size: 1.3T
            Backing store: /dev/zvol/dsk/Pool2/P2V5
            Status: online
Ok,
I see some potential issues.
1. The backing store should be as follows: /dev/zvol/rdsk/Pool2/P2V1, so you need to change 'dsk' to 'rdsk'.
Use the following to fix it:
svccfg -s iscsitgt listprop | grep backing-store
param_nvs1_0/backing-store astring /dev/zvol/dsk/Pool2/P2V1
param_nvs1_1/backing-store astring /dev/zvol/dsk/Pool2/P2V2
etc...
svccfg -s iscsitgt setprop param_nvs1_0/backing-store=/dev/zvol/rdsk/Pool2/P2V1
svccfg -s iscsitgt setprop param_nvs1_1/backing-store=/dev/zvol/rdsk/Pool2/P2V2
svcadm refresh iscsitgt
svcadm restart iscsitgt
2. You are at 2.0 TB, which is not advisable for two reasons. First, the volume is too large and will end up with SCSI reservation issues (which kill performance); go to 1 TB max. Second, it may actually slightly exceed VMware's maximum LUN size. Set it just below 2 TB if you must have that size.
3. I recommend you use ACLs once you have it working; there are examples on my blog.
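A rough sketch of what the ACL setup looks like with iscsitadm; the IQN and alias below are placeholders, so copy the real initiator IQN from the ESX host's storage adapter properties screen:

```shell
# Register the ESX host's initiator IQN under a local alias
# (the IQN here is a placeholder -- use the host's real one)
iscsitadm create initiator --iqn iqn.1998-01.com.vmware:esxhost1 esx1

# Restrict the target so only that initiator can log in
iscsitadm modify target --acl esx1 nvs1

# Verify the ACL shows up on the target
iscsitadm list target -v nvs1
```

Do this only after everything is working, since a typo in the IQN will lock your host out of the target.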
Thanks Mike, I'll get cracking on this and let you know how it turns out.