jmacdonagh
Contributor

Share one ESX's physical persistent storage with another ESX Server


Hi all,

I have two almost identical hosts, each with ESX 3.5 installed. One of the hosts has a RAID controller, so we stuck in an 80GB drive and four identical 750GB drives. In the RAID setup, we have the 80GB as a separate array and the four 750GB drives as a RAID 10 array.

When we installed ESX, we created the ext3, swap, and vmcore partitions on the 80GB drive, and partitioned the entire bigger RAID array (1.5TB) as VMFS.

This part works perfectly. The problem is, we want another host to be able to use this storage. I can't seem to get NFS to work.

First off, I have temporarily enabled --allowIncoming and --allowOutgoing on both hosts' firewalls, just to debug this issue. I also created a VMkernel port group on both servers and gave them appropriate IP addresses. In my /etc/exports on the RAID host, I have:

/vmfs/volumes/48.... *(rw) # Where 48... is the long name for storage1, you can't use the symlink

And when I try to mount it from the command line, even from the original RAID host, it fails with "Permission denied". Looking at /var/log/messages, I see:

localhost rpc.mountd: getfh failed: Operation not permitted

I thought that maybe something is special about the way ESX mounts its storage (since it doesn't show up in the standard mount list), so I decided to:

mount /dev/sdb1 /root/test

and then changed my /etc/exports appropriately. We still get the same error.
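For reference, a typical /etc/exports entry for an ext3-backed path looks like the following; the path and client network here are illustrative, not taken from the setup above, and you would run `exportfs -r` after editing the file:

```
# /etc/exports -- illustrative entry; adjust path and client network
/root/test  192.168.1.0/24(rw,sync,no_root_squash)
```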

So, my question is, has anyone ever run into a situation like this? Does anyone know if this is possible / which steps I need to take to get it working?

Thanks,

Johann

18 Replies
weinstein5
Immortal

Two things: 1) it is not recommended to share an NFS mount from the service console, and 2) remember that NFS is its own file system, not VMFS, so you would not be able to share the VMFS datastore. What you are trying to do is not possible; VMFS datastores on internal drives cannot be shared.

If you find this or any other answer useful please consider awarding points by marking the answer correct or helpful
jmacdonagh
Contributor

In the end, I'm going to put the vmkernel port group on its own vswitch with its own uplink NIC, so it won't interfere with the service console.

I think I've almost got this down. It looks like NFS does not play nice with VMFS. Since this is only for two hosts, and for a relatively small amount of space, I formatted the RAID array with ext3, mounted it on the server, and then set up the NFS export. I plan on having both servers mount via NFS.

Funny thing is, although both servers can mount it via the command line, both fail when trying to do it via Infrastructure Client. I'll have to mess with the vmkernel IP assignments tomorrow.
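One note for anyone debugging the same symptom: the Infrastructure Client mounts NFS datastores through the VMkernel port, not the service console, so the VMkernel IP and routing are what matter there, which is consistent with the CLI mount working while the client fails. The service-console equivalent looks roughly like this (the host address, share path, and datastore label below are placeholders):

```
# Add the NFS export as a datastore (goes through the VMkernel stack)
esxcfg-nas -a -o 192.168.1.10 -s /root/test nfs-store1
# List configured NFS datastores to verify
esxcfg-nas -l
```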

weinstein5
Immortal

As I said, this is not recommended. I think you will find you have intermittent connectivity to the service console that you mounted the ext3 storage on, because the network communication to the NFS share will go through the service console port. I also think you will end up with poor performance on that ESX host and its VMs. The best solution for what you are trying to do is to build a Linux VM with a large virtual disk, use one of the open source packages like FreeNAS or OpenFiler, and present that large disk as either an iSCSI target or NAS/NFS. That way you are not forcing the service console to do something it is not meant to do, and you can still share the storage. If you do pursue your original plan, make sure you increase the memory assigned to the service console to 800 MB and make the service console swap partition 1.6 GB.

If you find this or any other answer useful please consider awarding points by marking the answer correct or helpful
Texiwill
Leadership

Hello,

I would look at XtraVirt's XVS or LeftHand Networks' VSA solutions. I would NOT use NFS, as it is an unsecured protocol, and you definitely do not want the SC to be an NFS server. Another option is to create a VM with a large Raw Disk Map that acts as an iSCSI server.


Best regards,

Edward L. Haletky

VMware Communities User Moderator

====

Author of the book 'VMware ESX Server in the Enterprise: Planning and Securing Virtualization Servers', Copyright 2008 Pearson Education.

CIO Virtualization Blog: http://www.cio.com/blog/index/topic/168354

As well as the Virtualization Wiki at http://www.astroarch.com/wiki/index.php/Virtualization

--
Edward L. Haletky
vExpert XII: 2009-2020,
VMTN Community Moderator
vSphere Upgrade Saga: https://www.astroarch.com/blogs
GitHub Repo: https://github.com/Texiwill
jmacdonagh
Contributor

Edward,

So, I could create a VM on the host, and give it raw access to the RAID array? If I did that, how would that VM start up when the server starts up? The server would need access to the persistent storage to boot up the VM, which it can't get until the VM is booted, etc...

weinstein5
Immortal

This VM would need to be stored on another datastore, perhaps a local one.

If you find this or any other answer useful please consider awarding points by marking the answer correct or helpful
Texiwill
Leadership

Hello,

You would need two local volumes/LUNs, well, three: one for the OS, one for a small local VMFS (enough to hold the VM in question), and a third (no greater than 2 TB) to which the VM can connect and then serve up the data.

If you use VSA you get some disaster recovery, with data copying between local LUNs.


Best regards,

Edward L. Haletky

TomHowarth
Leadership

Do not use NFS. Create a VMFS partition on the local storage, then download and install the XtraVirt XVS found here. This appliance creates an iSCSI target for your hosts to attach to. Documentation on how to use it is also on the site.

If you found this or any other answer useful please consider the use of the Helpful or correct buttons to award points

Tom Howarth VCP / VCAP / vExpert
VMware Communities User Moderator
Blog: http://www.planetvm.net
Contributing author on VMware vSphere and Virtual Infrastructure Security: Securing ESX and the Virtual Environment
Contributing author on VCP VMware Certified Professional on vSphere 4 Study Guide: Exam VCP-410
jmacdonagh
Contributor

Thanks, everyone, for your replies.

First off, these drives are SATA. Can I still do a raw device mapping for the XtraVirt XVS? I wouldn't think iSCSI would support SATA, but maybe VMware can emulate it.

Here's my current plan. I'm running on an extremely tight budget. If I had the money, I would have bought a proper SAN:

Server 1 (with the SATA RAID controller): install ESX onto the 80GB SATA drive. Partition the 1.5TB RAID 10 array into two partitions, one about 50GB and the other the rest. I'm only going to format the 50GB partition with VMFS. The other I will format as ext3, because I don't want ESX to use it as a persistent storage device.

Once the server is up, I'll set up the XtraVirt XVS on the 50GB partition and assign the other partition to it. It should be able to handle formatting it as VMFS and setting it up as an iSCSI target. Then both machines can use that iSCSI target for persistent storage.

Also, is there any problem with installing VirtualCenter on a VM on that 50GB partition (that's why I plan on making it so large anyway)? Right now the only machine I have to run VirtualCenter is an old Celeron with 512MB RAM. It'll only be handling 2 hosts, but I'd like to give it more processing power and more RAM in a VM.

Thanks!

Johann

jmacdonagh
Contributor

Scratch that. I'm actually going to use OpenFiler. I only need to have the iSCSI target available from one machine. No replication.

On top of that, I'm going to assign the whole RAID array via raw device mapping to the Openfiler VM. That means I'm going to have to find another SATA drive here somewhere...

jmacdonagh
Contributor

My plan ran into a hitch. I'm going to try to take another whack at it tomorrow and see what happens. Basically, this is what I did. I'm trying to get this working on the first server, and then I'll move to hooking up the second.

I set up two RAID 0 arrays from the 80GB drive. One would act as the local disk for the ESX host; the other would act as a small VMFS for VirtualCenter and Openfiler. I also created a RAID 10 array with the four 750GB disks. ESX booted up smoothly and I had about 40GB (from the 80GB) to install the VirtualCenter and Openfiler VMs.

The problem is, I wasn't able to assign the large RAID 10 array as a raw disk to the Openfiler VM. Right now it's completely unpartitioned. It shows up as /dev/sdc, and running fdisk confirms it has no partition table. When I went to add a new drive to the Openfiler VM, the raw disk option was greyed out. Do I need to partition the large RAID array first? Do I only assign a partition as a "raw disk"? Or do these disks literally have to be SCSI disks? They're SATA.

EDIT: Oh, another issue. If I have to reboot this server, it obviously won't be able to reconnect to the iSCSI storage device until that VM is powered on. Will ESX keep trying to connect (since, chances are, ESX will initially try to reconnect to it before the Openfiler VM is up and running)?

Anyway, I plan on trying out a few ideas tomorrow. I just thought I'd post this and maybe wake up tomorrow to an answer ;)

Thanks!

Johann

Texiwill
Leadership

Hello,

You need to edit the vmx file for Openfiler by hand. Add the following lines:

scsi2.present = "TRUE"
scsi2:1.present = "TRUE"
scsi2:1.deviceType = "scsi-passthru"
scsi2:1.fileName = "/dev/cciss/c0d1"

Where /dev/cciss/c0d1 is the device name of the unformatted LUN. Use 'fdisk -l' to determine that name. Be sure the VM is powered off when you do this.


Best regards,

Edward L. Haletky

jmacdonagh
Contributor

Thanks for the reply, but that didn't work.

fdisk -l listed three devices with these partitions:

/dev/sda
  /dev/sda1 - ext3 # where I have ESX installed
  /dev/sda2 - swap
  /dev/sda3 - vmcore
/dev/sdb
  /dev/sdb1 - VMFS
/dev/sdc

And /dev/sdc is the large RAID array. I edited the Openfiler.vmx file and added this towards the end:

scsi1.present = "true"
scsi1:0.present = "true"
scsi1:0.deviceType = "scsi-passthru"
scsi1:0.fileName = "/dev/sdc"

When I went to see the settings in Infrastructure Client, it showed the new device but said that the host device was "unavailable". I have since tried partitioning the disk with one VMFS partition (no formatting) and changing it to /dev/sdc1. Same thing. I have also tried /dev/cciss/c2d0 and /dev/cciss/c0d2. It always says the device is unavailable in the settings, and when I power on, it tells me that the operation is not permitted.

Should this work with SATA arrays?

Texiwill
Leadership
Leadership

Hello,

Once the LUN is mapped, you boot the VM and then use something inside the VM to format the partition. If you are sharing this out with iSCSI, then you have to tell Openfiler to allow the access, etc. /dev/sdc is the LUN to use.

I have done this a few times when I did not have a SAN, and it did work. Does the Openfiler VM see the drive at all? Remember, even so, there is a 2TB limit.
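The "format inside the VM" step above is ordinary Linux disk prep. Assuming the passthrough LUN shows up in the guest as /dev/sdb (the guest device name here is an assumption, check with fdisk), it would look roughly like:

```
# Inside the Openfiler VM -- the guest device name /dev/sdb is an assumption
fdisk -l            # confirm the guest actually sees the passthrough disk
fdisk /dev/sdb      # create a partition on it
# Openfiler's web UI can then build a volume on the new partition
# and export it as an iSCSI target to both ESX hosts.
```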


Best regards,

Edward L. Haletky

jmacdonagh
Contributor

I can't even boot the VM. The specific error is "Unable to perform this operation in its current state (Powered Off)" (yes, I get that error when trying to power it on ;) ). Again, when I go into the VM's settings, I see the new SCSI device there, but it says /dev/sdc is not available.

Should I hook that SCSI device up to the existing SCSI controller? The default one is an LSI Logic one, but when I added those few lines, it looks like it added a BusLogic one automatically.

Again, the VM won't even boot up :(

jmacdonagh
Contributor

Ah ha! I think I have it!

I had to create a raw disk .vmdk using vmkfstools. I created one for /vmfs/devices/vhba0:2:0:0 (not sure why I couldn't just do vhba0:2:0). After that, the Openfiler VM could see the disk perfectly. As I partitioned it in the VM, I could see the changes being written to the actual /dev/sdc on the host machine.
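For reference, since the exact command was lost in the re-install, the RDM creation step with vmkfstools looks roughly like this; the device path matches the one mentioned above, while the destination .vmdk path is illustrative:

```
# Create a raw device mapping .vmdk backed by the whole LUN.
# -r = virtual compatibility mode; -z = physical (passthrough) mode.
vmkfstools -r /vmfs/devices/vhba0:2:0:0 \
    /vmfs/volumes/storage1/openfiler/openfiler-rdm.vmdk
```

The resulting .vmdk is then attached to the VM as an existing disk, which avoids hand-editing the vmx file.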

Sorry for the lack of specifics. I've messed with the servers so much that I'm doing a clean re-install now.

Thanks again for everyone's help!

Johann

Texiwill
Leadership

Hello,

The last ":0" is the partition number to use for the raw disk. That is definitely required. It does not like referring to the entire LUN for some reason.


Best regards,

Edward L. Haletky

ddcSupport
Contributor

What command did you use to create the RDM? I've tried -r, -z, etc., and can't get anything more than ~750GB. Thanks. Robert
