bytesector
Contributor

Existing RAID array integrated into virtual machine

Hi everyone,

I am extremely new to ESXi, but I've been using VMware Workstation for a year now. One of the things I loved in Workstation is the ability to attach an existing physical hard drive to a VM session.

I would like to implement ESXi in my environment, but I have a problem. I have an Adaptec 52445 RAID controller with a populated RAID 6 array (with one large NTFS partition). I would like to know if it's possible to attach this existing array to a VM in ESXi without any data loss. And if it is possible, how do I go about it?

At the moment, this is the one hang-up I have about implementing ESXi in my environment. If I could get this straightened out, I would have ESXi out of the lab in a jiffy.

Thanks in advance.

Chris.

Rumple
Virtuoso

A couple of limitations:

The drive must be smaller than 2 TB minus 512 bytes.

You would have to attach it as an RDM (Raw Device Mapping).
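
For reference, this is usually done from the ESXi console (Tech Support Mode). A minimal sketch, assuming an existing VMFS datastore to hold the pointer file; the vml identifier and datastore path here are placeholders you would replace with your own:

    # list the physical disks the host can see and note the vml identifier
    ls -al /vmfs/devices/disks/

    # create a virtual compatibility RDM pointer file on a VMFS datastore
    # (use -z instead of -r for physical compatibility mode)
    vmkfstools -r /vmfs/devices/disks/vml.XXXX /vmfs/volumes/datastore1/rdms/array-rdm.vmdk

You then attach the resulting .vmdk to the VM as an existing disk. The 2 TB minus 512 byte limit applies to the mapped disk either way.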

PaulSvirin
Expert

For ESXi 3.5 there were some limitations: http://communities.vmware.com/message/1140618

Here is also a comment about ESXi:

---
Paul Svirin
StarWind Software developer
iSCSI SAN software ( http://www.starwindsoftware.com )
DSTAVERT
Immortal

It may not be practical for what you want to do. ESX(i) does not support NTFS as a datastore. You might be able to use VMDirectPath to connect the NTFS volume to a virtual machine.
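
If the hardware does support it, the rough shape is: mark the controller for passthrough on the host (Configuration > Advanced Settings in the vSphere Client, then reboot), and add it to the VM as a PCI device. That ends up as .vmx entries along these lines; the IDs shown are made-up placeholders that the client fills in from your actual controller:

    pciPassthru0.present = "TRUE"
    pciPassthru0.id = "05:00.0"
    pciPassthru0.vendorId = "0x9005"
    pciPassthru0.deviceId = "0x028b"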

You could perhaps set up the current machine as an NFS datastore and configure another machine to run ESXi.
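
If you try the NFS route, mounting the export as a datastore from the ESXi console looks roughly like this; the host name, export path, and label are placeholders (the Add Storage wizard in the vSphere Client does the same thing):

    # mount an NFS export as a datastore on the ESXi host
    esxcfg-nas -a -o fileserver.example.com -s /export/vms nfs01

    # list the NFS mounts to confirm
    esxcfg-nas -l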

-- David -- VMware Communities Moderator
bytesector
Contributor

Thanks for the replies.

@Rumple: Unfortunately I am trying to attach arrays ranging from 4 to 10 TB. Is there another VMware product (ESX maybe?) that handles an array above that 2 TB mark?

@PaulSvirin: I took a read through the ESXi 3.5 link and it looks promising. I will take a look at my setup tonight and see if there is a viable solution. I had come across similar approaches before without much implementation luck.

@DSTAVERT: I'm not aiming to use an NTFS volume as a datastore. Rather, I would like to present a physical drive with its existing NTFS partition to a virtual machine as a volume.

Thanks guys. I will keep this thread updated on whether your suggestions work or not.

DSTAVERT
Immortal

If your hardware supports it, look at the VMDirectPath link. In either passthrough or DirectPath you will be limited to one controller on one virtual machine. If you have multiple arrays on a single controller and want to pass them through to multiple VMs, it won't work.

-- David -- VMware Communities Moderator
bytesector
Contributor

Ok, finally an update. I tried creating an RDM VMDK file using the "ls -al /vmfs/devices/disks/" and "vmkfstools -r /vmfs/devices/disks/vml.x /vmfs/volumes/.vmdk" method but received this error:

Failed to create virtual disk. The destination file system does not support large files (12).

I was trying to link in a 4 TB disk with a full 4 TB NTFS partition. I'm guessing it won't work.

What I am trying to achieve is this: I currently have a file server that I would like to virtualize, along with some other systems. My file server has two RAID arrays running off an Adaptec 52445 (a supported controller). One array is 4 TB and the other is about 7.5 TB.

Is there a way to bring those drives (with their existing data) into a virtual machine in ESXi? If not, and I understand that's a very likely possibility, is there another virtualization platform (VMware or otherwise) that will do this for me? I know it can likely be achieved if I ran VMware Server or Workstation or even Hyper-V on a Windows platform, but I really like the idea of cutting a Windows host out of the mix.

Please help. If for no other reason than to tell me it can't be done. Thanks in advance!

Rumple
Virtuoso

The limit for VMFS and RDM volumes is 2 TB minus 512 bytes. There are a couple of ways to do what you are trying to accomplish:

One is to create multiple 1.99 TB volumes and use extents to make one big VMFS volume (which can be as much as 64 TB). You format this with an 8 MB block size, create the appropriately sized VMDK on it, and migrate your data (although I think you still hit the 2 TB limit per VMDK). Using Robocopy you can keep the permissions and do incremental copies over a month or so until all the data is in sync (see the sketch further down). By disabling strict name checking you could then have the new server allow users to map shares using the old DNS CNAME. You could also implement DFS replication, although that's usually harder to set up on existing configurations.

Extents are generally discouraged, as you can never remove a disk later without wiping the entire VMFS volume. Personally, I think of extents like Microsoft dynamic disks... sure, it can be done, but it's probably not a good idea.

Another option (and the one I generally recommend, as it's better for backups) is to try to split your data into multiple 2 TB volumes. The biggest advantage is that when you have backup software that uses multiple streams (like HP Data Protector or TSM), you can put five different 2 TB volumes in a single backup job and they will all back up at once, versus a single job only being able to use a single thread against a single volume. That cuts backup times dramatically.

Obviously this all depends on your file system and share layouts.
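
On the Robocopy point above, a minimal sketch of the kind of incremental sync I mean; the server and share names are placeholders, and you should test the switches against your own permission setup first:

    rem mirror the old share to the new one, preserving NTFS security and ownership
    robocopy \\oldserver\data \\newserver\data /MIR /COPYALL /R:1 /W:1 /LOG:C:\robocopy.log

Re-running it only copies new or changed files, so the final cutover sync is quick. Just be aware that /MIR also deletes files on the destination that no longer exist on the source.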

If you have the capability to turn that other server into an iSCSI target, you could also use the iSCSI initiator right in the VM's operating system to mount that disk as a raw volume... of course it would be better sitting on a proper SAN, but beggars can't be choosers sometimes 🐵
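
For the in-guest approach on a Windows VM, the built-in initiator can be driven from the command line roughly like this; the portal address and target IQN are placeholders (the initiator GUI works just as well):

    rem register the target portal with the initiator
    iscsicli QAddTargetPortal 192.168.1.50

    rem list the discovered targets, then log in to the one you want
    iscsicli ListTargets
    iscsicli QLoginTarget iqn.2010-01.com.example:storage.array1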

DSTAVERT
Immortal

I'll try this again. Have a look at VMDirectPath. If your hardware supports it, the VM can have direct access to the controller.

-- David -- VMware Communities Moderator