VMware Cloud Community
amirpc
Contributor

Confused trying to configure SAN

I have 4 VMware ESX servers that I have configured into a VMware cluster, and they're licensed for HA and DRS.

However, what I can't figure out is how to properly present my iSCSI SAN so that they can all see the same storage and migrate VMs quickly.

I've read the SAN documentation and the shared VMFS documentation, and I'm just more confused.

I don't know if I need to use RDMs, or just create a bunch of VMFS stores that are on the same SAN but presented via different LUNs, or something else entirely.

I really just want one LUN that all hosts can see and use to store their VM files, so migration can happen quickly. Is this possible?

Can someone perhaps point me in the right direction here? Even a pointer to which part of which documentation to consult would be helpful.

6 Replies
RParker
Immortal

Well, on a SAN you create some volumes from a group (an aggregate on NetApp). Then on a volume, you make a LUN.

Then you need to map the LUNs to your iSCSI initiator so that they will be visible.

Each LUN has a unique identifier (I use Fibre Channel, so I am not exactly sure what the iSCSI equivalent is) that the ESX server will use to "see" the LUNs you created.

Your iSCSI initiator also has a unique ID that distinguishes it from the other iSCSI adapters.

Did you create the LUNs?

After you create the LUN and ESX can see it, THEN you can create VMFS volumes on the LUN.
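
For reference, the rough shape of this from the ESX 3.x service console is something like the following (the HBA name, IP address, and datastore label are just placeholders, and the VI Client "Add Storage" wizard does the same thing through the GUI):

    # Enable the software iSCSI initiator on this ESX host
    esxcfg-swiscsi -e

    # Point the initiator at the target's SendTargets discovery address
    vmkiscsi-tool -D -a 10.0.0.1 vmhba40

    # Rescan the HBA so the new LUN shows up
    esxcfg-rescan vmhba40

    # Format partition 1 of the LUN as VMFS3 (assumes a VMFS partition already
    # exists; the Add Storage wizard handles the partitioning for you)
    vmkfstools -C vmfs3 -b 1m -S my_datastore /vmfs/devices/disks/vmhba40:0:0:1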

amirpc
Contributor

Well, I'm not quite THAT out of touch. :)

What I've done is create lun0, which I intend to be a 2 TB datastore for VMs.

I map lun0 as a VMFS volume on host1, no problem.

However, if I try to add lun0 to host2, it wants to reformat it, which I take to mean the hosts aren't playing nice about sharing the LUN.

Furthermore, if I try to add a VM on host2, the lun0 datastore I created on host1 shows up as single-host only.
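
One way to check whether host2 is really seeing the same LUN and the same VMFS volume as host1 (rather than a blank device) is to compare the service console view on each host, roughly like this (flags from memory, and the vmhba numbers vary per host):

    # Show the LUNs this host sees and the VMFS volumes that live on them
    esxcfg-vmhbadevs -m

    # List the datastores this host has mounted; host1 and host2 should both
    # show the same volume if they really share the LUN
    ls -l /vmfs/volumes/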

acr
Champion

If you do decide to use RDM, the whole LUN will belong to the VM to which it is attached, just as a physical LUN would.

Also, a single 2 TB LUN for all VMs is not necessarily best practice; multiple 400-650 GB LUNs might be better. Huge LUNs shared by a large number of VMs can impact performance, file locking, etc.

If you do have to use a 2 TB LUN, check your block size, as it determines how large any single file inside the LUN can be.

So you created a 2 TB LUN, presented it to ESX, and formatted it as VMFS3. When you then create a VM and assign it a disk (drive C:) of, say, 20 GB, a 20 GB file is created on the 2 TB LUN (again, this isn't RDM). Is this what you're doing?
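
For reference, on VMFS3 the block size chosen at format time caps the largest single file (i.e. the largest virtual disk) the volume can hold:

    1 MB block  ->  256 GB max file size
    2 MB block  ->  512 GB max file size
    4 MB block  ->    1 TB max file size
    8 MB block  ->    2 TB max file size

    # e.g. formatting with an 8 MB block size from the service console
    # (the device path here is a placeholder)
    vmkfstools -C vmfs3 -b 8m -S big_datastore /vmfs/devices/disks/vmhba40:0:0:1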

amirpc
Contributor

Right, but what I don't understand is: why does the datastore on the 2 TB LUN say single host?

Basically, I can make the SAN work with a single host, but I want to know how to configure it so multiple hosts can quickly migrate VMs between themselves using the SAN.

amirpc
Contributor

Okay so I think the problem is this.

My SAN is an iSCSI Enterprise Target box, with 4 failover NIC bonds, each with its own private IP.

Every time I tried to connect two servers to the same IP on the SAN, it would allow one to connect but not the other. That's when I discovered that iSCSI Enterprise Target has a MaxConnections variable, but it doesn't do anything; it is stuck at 1.

So I think, no problem: I buy 4 network cards and have each host's iSCSI initiator on its own card (each with its own IP). I connect each one to a different IP on the iSCSI box, I scan the HBA, and all of my LUNs show up.

However, I think a side effect of this is that, since they are all connecting to different IPs, VMware thinks they are each connecting to a completely different iSCSI target.

Does this make sense to anyone?
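
For anyone following along, a minimal iSCSI Enterprise Target setup along those lines looks roughly like this in /etc/ietd.conf (the IQN and backing device are made up):

    Target iqn.2001-04.com.example:storage.lun0
        # the 2 TB backing store exported as LUN 0
        Lun 0 Path=/dev/sdb,Type=fileio
        # the setting that appears stuck at 1
        MaxConnections 1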

amirpc
Contributor

I found the point of failure.

I'll give you a hint: it was me.

I wasn't scanning for new VMFS volumes in the Storage Adapters section. Also, I wasted 200 dollars on new network cards. Oh well, at least each host has its own 2 Gbps connection to the iSCSI target now.
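
In other words, the fix is just a rescan on each host, either from the VI Client (Configuration > Storage Adapters > Rescan, with "Scan for New VMFS Volumes" checked) or, roughly, from the service console:

    # Rescan the software iSCSI HBA so existing VMFS volumes are picked up
    # (the vmhba number varies per host)
    esxcfg-rescan vmhba40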
