This has been driving me nuts for two days. I am trying to get an EMC Unity iSCSI LUN to show up in a Fedora VM so I can map an NFS folder to the EMC Unity volume, but I can only get the iSCSI to connect at the ESXi host level.
I have an ESX server with two 1 GbE ports connected to a network switch.
NIC 1 - internet NIC, plugged into the switch, 10.1.10.x
NIC 2 - iSCSI NIC, plugged into the same switch, 10.1.80.x
The iSCSI LUN is at 10.1.80.1.
I have the VMware iSCSI VMkernel adapters set up in vSphere, and I can connect to the EMC and see the datastore.
If I log in to the VM, which is Fedora 26, I have two IPs assigned on the NIC: 10.1.10.100 and 10.1.80.100.
I can ping out to 4.2.2.2
I can ping another VM on my internal 10.1.80.x network.
I can ping 10.1.80.2, which is my iSCSI VMkernel port (VMkernel-Iscsi01) on the vSwitch that is connected to this VM and the EMC unit.
I cannot ping my EMC iSCSI connector on 10.1.80.1 from inside Fedora.
Any suggestions?
If you want to use the second NIC only for iSCSI in the VM, then do a passthrough of the NIC.
Install the driver in the guest. You will keep the NIC on the same LAN as the Unity iSCSI network.
Try adding the target from the Fedora guest and check.
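On the Fedora side, adding the target would look something like this. A sketch only: the portal IP is the one from this thread, and the IQN shown is a placeholder you would replace with whatever the discovery step actually returns from the Unity.

```shell
# install the open-iscsi initiator tools if not already present
dnf install -y iscsi-initiator-utils

# discover the targets offered by the Unity iSCSI portal
iscsiadm -m discovery -t sendtargets -p 10.1.80.1

# log in to the discovered target (substitute the real IQN from the discovery output)
iscsiadm -m node -T iqn.1992-04.com.emc:cx.example.target -p 10.1.80.1 --login

# the new block device should now show up
lsblk
```

If discovery fails here, the VM can't reach the array on the iSCSI subnet, which matches the ping symptom described above.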
There is no need for the iSCSI LUN to be presented to the ESXi hosts in this case.
However, if this configuration works, you will be dedicating one NIC just to the Fedora NFS traffic, and you also cannot vMotion the VM.
I believe the Unity can also be used as a file-share NAS along with block-level SAN storage, so why don't you run Unity in that mode and mount the NFS volumes on all hosts?
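In that NAS mode, mounting a Unity NFS export from the Fedora guest is a plain NFS mount. The server IP and export path below are only examples; they would come from however the NAS server and file system are configured on the Unity.

```shell
# create a mount point and mount the Unity NFS export (server IP and export path are examples)
mkdir -p /mnt/unity-nfs
mount -t nfs 10.1.80.1:/example_export /mnt/unity-nfs

# make it persistent across reboots by adding a line like this to /etc/fstab:
# 10.1.80.1:/example_export  /mnt/unity-nfs  nfs  defaults,_netdev  0 0
```

The `_netdev` option just tells the boot process to wait for the network before attempting the mount.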
Hope I'm not confusing you.
I am trying to get an EMC Unity iscsi to show up on a fedora VM so I can map an NFS folder to the EMC Unity volume
Could you elaborate on this, please?
This is totally confusing.
This makes sense, but I don't have an available NIC when I hit Edit Settings on the VM. Maybe that's because I have it bound to the iSCSI VMkernel. I'll try to unbind that in vSphere and see if I can get it to show up inside the VM OS.
It also doesn't make sense that, with all these NICs plugged into the same switch, I can't ping the EMC at the OS level. I would think I should be able to, anyway.
I feel like an idiot. I had the switch ports the EMC was plugged into set to a virtual port for another subnet. I removed that, added them to my 10.1.10.x internal network, and it works.
That is fine... so we should be able to ping the array from the VM now, and your argument is valid.
Having said that, using the same pNIC for VM traffic and for mapping iSCSI might not be a good idea. I'm not sure it can even be done, and even if it works, it's a single point of failure.
For accessing iSCSI directly in the guest, use a dedicated NIC with passthrough. BTW, could you tell us why you would like the VM to have iSCSI?
You can always map the iSCSI LUN to ESXi and then run the VMs on the VMFS datastore.
From my understanding, I can grow a partition without bringing down the OS using the EMC. I want a growable partition for my MySQL database: as it grows and fills up the disk, I can just add disks to the volume in the EMC and not bring it down.
From my understanding I can grow a partition without bringing down the os Using the EMC
Response:
You can do that not just from the EMC; lots of other arrays also support hot-extending the LUNs presented to ESXi. You can increase the datastore size on the fly and increase the vmdk assigned to the VM as well.
That is not justification enough to map iSCSI directly to the guest, bypassing the hypervisor.
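As a concrete example of the hypervisor-side grow: the vmdk can be extended either through Edit Settings in the vSphere client, or from the ESXi shell with vmkfstools. The datastore path and sizes below are hypothetical.

```shell
# extend the virtual disk to 100 GB from the ESXi shell
# (path is an example; for a hot-extend on a running VM, use the vSphere UI instead)
vmkfstools -X 100G /vmfs/volumes/datastore1/fedora-vm/fedora-vm.vmdk
```

After that, the guest still has to be told about the larger disk, which is the part discussed below.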
But when I increase the vmdk, don't I have to run fdisk, pvresize, lvextend, and all that in the OS so it sees the bigger disk? That is what I am trying to prevent.
For example, when I increase the disk space on the VM, I run this in the OS:

Up the size in the VMware VM properties, then:

df                    # check current usage
fdisk -l              # confirm the disk now shows the larger size
fdisk /dev/sda        # then, at the fdisk prompts:
d                     # delete a partition
2                     # ...partition 2
n                     # create a new partition
p                     # primary
2                     # partition number 2
<return>              # accept the default first sector
<return>              # accept the default last sector
w                     # write the new table and exit
reboot                # so the kernel re-reads the partition table
pvresize /dev/sda2
pvscan
lvextend -l +100%FREE /dev/mapper/fedora-root
resize2fs /dev/mapper/fedora-root
df

or, for XFS, run xfs_growfs /dev/mapper/fedora-root instead of resize2fs.
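For what it's worth, the delete/recreate dance and the reboot can usually be avoided: growpart (from the cloud-utils package) grows the partition in place and the kernel picks the change up online. A sketch, assuming the same /dev/sda2 PV and fedora-root LV layout as above:

```shell
# grow partition 2 of /dev/sda in place (no delete/recreate, no reboot)
growpart /dev/sda 2

# tell LVM the physical volume got bigger
pvresize /dev/sda2

# give the root LV all the new space, then grow the filesystem
lvextend -l +100%FREE /dev/mapper/fedora-root
xfs_growfs /dev/mapper/fedora-root    # XFS; use resize2fs for ext4
```

That keeps the whole grow online, which seems to be the goal here, without handing the iSCSI LUN to the guest directly.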