For years I've been running my Boot Camp VM from a bootable external hard drive on my MacBook. This lets me quickly jump into Windows for small things while retaining the ability to boot into Windows directly for work that requires dedicated hardware resources.

I recently decided to upgrade from a 500GB SSD to a 1TB SSD. After cloning the drive and verifying that I could boot from it directly, VMware was unable to create a new Boot Camp VM. I tried to recreate the VMDK using the vmware-rawdiskCreator command in Terminal, but it failed repeatedly with the dreaded "Resource deadlock avoided" error.

I spent days working with VMware support over email, but stumbled upon the root cause and pseudo-solution after hours of troubleshooting myself: Fusion relies on the vendor and model names to identify the disk target in the VMDK. Since I have two Samsung T5 SSDs connected (one for Boot Camp, one for Time Machine), it can't discern between them.

I've suggested to tech support that they move to something a bit more unique (like the serial number) and hope that message makes its way to the dev team. For now, the workaround is to just unplug my Time Machine drive when I need to boot up the VM. Hopefully this helps anyone else with a soft spot for brand loyalty (I love the T5 drives) and a heterogeneous-OS lifestyle.
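For anyone who wants to try this themselves, the tool lives inside the Fusion app bundle. A rough sketch of the usual raw-disk workflow follows; the device node (/dev/disk2) and output path are placeholders only, so check `diskutil list` on your own machine first:

```shell
# vmware-rawdiskCreator ships inside the Fusion app bundle
RDC="/Applications/VMware Fusion.app/Contents/Library/vmware-rawdiskCreator"

# Find the /dev/diskX node of the external drive first
diskutil list

# Create a raw-disk VMDK covering the whole device
# (/dev/disk2 and the output path are example values)
"$RDC" create /dev/disk2 fullDevice ~/Documents/bootcamp ide
```

With a second identical-model drive attached, that last command is where the "Resource deadlock avoided" error shows up; with the twin unplugged, it completes normally.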
I just wanted to say thank you for your post.
I spent a few hours this morning trying to attach two HDDs to a VM using the raw disk creator, to no avail: I kept getting the "resource deadlock avoided" error.
Your post instantly made me realise the issue: the two drives are the same make, model, size, and even manufacture date!
Unplugged one, job done. I agree that the serial number could be used to differentiate.
Wow. Just wow.
I've been trying to do exactly the same thing on my Mac Pro. I have multiple drives of exactly the same make and model installed, and I've always received the "Unable to create the source raw disk: Resource deadlock avoided (720905)." error from vmware-rawdiskCreator. It had me running around in circles, until I tried creating a .vmdk of a USB device on my laptop, then copied that to the Mac Pro and edited it to match the details of the device I'm actually aiming for. Great, I successfully moved the error from the command line to the main application.
So I tried vmware-rawdiskCreator on the USB device attached to my Mac Pro and to my surprise, it worked. I'd assumed it was machine-specific in some way. So I got searching again and bumped into this post. Upon finding this out I removed all identical drives, leaving only the Windows drive of that type, and tried again. It actually worked. Obviously, completely useless, because I need the other drives attached - but stunning that a piece of software should rely on such a shonky method of identifying drives. Honestly, even I have written better code than that.
Putting one of the drives back in brings the error back, so it's completely reproducible. Almost unbelievable, but thank you for finally helping me get to the bottom of this mystery.
Ok yes this is really stupid.
I was fine until I purchased another 8TB SSD to match the first one (yeah. I spent like a thousand bucks to hit this problem).
Hoping I can just unplug the other drive to make the vmdk file and then plug it back in.
Pass the cheese...
Sadly, the forum is not an official support channel. A developer might have picked this up and logged it, but we won't know for sure.
Not sure if you can open a ticket, as that requires a support contract (and no, I don't have one either).
Failing that, perhaps @Mikero can make sure this gets logged so that it can be looked into, or at the very least have a KB article about this issue?
Hm... I guess I'll just say that it's really hard to support something that Apple themselves don't support.
We can take a look at it, though. I don't think this is on anyone's radar yet.
Appreciate the reply and the attention the thread is getting. I actually had a support request open initially (20111687503), but discovered the solution myself after a couple of days of trying the suggested troubleshooting tips. I shared the solution with the Technical Support Engineer and received the generic "I will share this feedback with my leads for future implementation" response, but I'm assuming it never actually made it past that email. I'd be willing to reopen the support ticket (or create a new one) if that's an advisable route.
I am not sure this is a workaround at this point. Are you able to actually run with the two identical drives?
I unplugged the other drive that was already working in raw disk mode, then ran the raw disk creator on the new drive.
I verified that both VMDK files access different physical drives and don't overlap with other drives in the system... yet now Fusion is giving me the "resource deadlock avoided" error.
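If anyone wants to run the same sanity check, one way is to grep the extent lines of each descriptor and confirm they point at distinct device nodes. The descriptor files below are fabricated stand-ins, not real Fusion output; in practice you would grep the .vmdk files that the raw disk creator generated:

```shell
# Fabricated minimal raw-disk descriptors, just to illustrate the check
printf 'RW 15002931888 FLAT "/dev/disk1" 0\n' > demo1.vmdk
printf 'RW 15002931888 FLAT "/dev/disk2" 0\n' > demo2.vmdk

# Each descriptor should reference its own /dev/diskX node
targets=$(grep -ho '/dev/disk[0-9]*' demo1.vmdk demo2.vmdk | sort -u)
count=$(printf '%s\n' "$targets" | wc -l)
echo "$count distinct target disks"
```

Two distinct nodes here means the descriptors themselves don't overlap; the deadlock error comes from how Fusion matches the physical drives, not from the extent lines.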
I get that on Fusion they really only test this with Boot Camp using a single drive. But raw disk access is a useful feature, and it's essentially broken for this common use case.
I am not able to boot the VM with identical drive makes/models connected. For the last year I've developed a routine of unplugging my Time Machine disk any time I need to launch the VM.
Unfortunately, my plan was to use both of these drives with ESX under Fusion, as I was doing successfully until I purchased the second Micron 5210 for a paltry $900. Doing the obvious math, that's $1800 of hardware I can't use as planned.
Something else I discovered today that may be relevant:
macOS Catalina seems inconsistent in how it maps drives to the /dev/diskX scheme, which could do great harm to those using raw disks. On the other hand, it may be why we have this bug.
I am using a Mac Pro with the Promise J2i two-drive bracket. With two different drives in it, /dev/disk0 is an Apple drive, and the Promise SSDs get /dev/disk2 and /dev/disk3.
When I put the two identical drives into this bay, the new Micron got slotted at /dev/disk0 (replacing the Apple drive), its twin (the original drive) got /dev/disk1 (previously unslotted), and the Apple system drive (with the active Boot Camp partition) got slotted at /dev/disk2!
Given that the raw disk specification likes to address system drives by /dev/diskX, I was surprised to see that Fusion still tracked the Boot Camp raw disk from /dev/disk0 to /dev/disk2. I was able to boot up and use Boot Camp from Fusion. The VMDK file still says the disk is at /dev/disk0; however, it also records the drive manufacturer and model, as well as several UUIDs (unique identifiers):
partMediaUUID, partVolumeUUID, ddb.uuid
The Micron SSDs unfortunately ONLY have the ddb.uuid. I found that they were unique:
micron-hdd.vmdk:ddb.uuid = "60 00 C2 9e 00 43 b9 27-5f 56 bc 7a 43 33 fc 7d"
micron-hdd2.vmdk:ddb.uuid = "60 00 C2 9b f5 c5 85 01-35 c0 16 2b 77 7c 89 89"
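For what it's worth, macOS already exposes identifiers that are unique per physical disk: `diskutil info /dev/diskX` reports a "Disk / Partition UUID" field (and usually a serial number). A small sketch of extracting it follows; the sample output below is fabricated, and on a real Mac you would pipe `diskutil info` instead of the sample text:

```shell
# Fabricated excerpt of `diskutil info /dev/disk2` output
sample='   Device Identifier:        disk2
   Device / Media Name:      Micron_5210_MTFDDAK7T6QDE
   Disk / Partition UUID:    6000C29E-0043-B927-5F56-BC7A4333FC7D'

# The UUID is unique per disk, even when make, model, and size are identical
uuid=$(printf '%s\n' "$sample" | awk -F': *' '/Disk \/ Partition UUID/ {print $2}')
echo "$uuid"
```

Matching on that field instead of vendor/model strings would disambiguate twin drives cleanly.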
See below for the Apple drive description.
So, I think the Fusion team noticed that Apple's POSIX-style identifiers (i.e. /dev/diskX) could be inconsistent. If you wrote to a disk in direct access mode and it was the WRONG disk, it would definitely ruin your day.
Unfortunately, it looks like they chose the wrong identifiers to guarantee they were tracking the drive.
# Extent description
RW 48 FLAT "Boot Camp-pt.vmdk" 0
RW 614400 FLAT "/dev/disk0s1" 0 partitionUUID @partition:diskModel=APPLE|20SSD|20AP2048N,diskSize=2001111162880,diskVendor=,partSize=314572800,partOffset=24576,partMediaUUID=C950E08F-2AAC-43BD-8F61-F294608BA0C0,partVolumeUUID=E783267B-A4C3-3556-B751-DBED770EB996
RW 2734375000 ZERO
RW 1912 ZERO
RW 1171871744 FLAT "/dev/disk0s3" 0 partitionUUID @partition:diskModel=APPLE|20SSD|20AP2048N,diskSize=2001111162880,diskVendor=,partSize=599998332928,partOffset=1400315576320,partMediaUUID=E109530E-4188-4E5E-B889-E562FEE35EEA,partVolumeUUID=70303D82-41F9-4EF2-84D0-509E5A869554
RW 1557096 ZERO
RW 40 FLAT "Boot Camp-pt.vmdk" 48
# Extent description
RW 15002931888 FLAT "/dev/disk1" 0 partitionUUID @DiSk:diskModel=Micron_5210_MTFDDAK7T6QDE,diskSize=7681501126656
So I think this quick-and-dirty fix was done to avoid what happens when a new drive is added to the system and the device identifiers get shuffled around due to some bizarre behavior in macOS. Unfortunately, it does not track the drives by anything truly unique.
Really sorry. The support case dragged on for a long time, months. They never said anything about a fix, I missed a message about a workaround discussion... and then they closed the case.
I am now in a different support case, about a Windows guest crashing only on Big Sur (it works great on Catalina) due to Metal. The support engineer asked me to turn on OpenCL rendering in Fusion 12 through the VMX.
If you're in the know, you're aware that VMware removed non-Metal rendering in Fusion 12. So... so much for qualified support staff.
I think VMware has moved on to M1 for Fusion, and us recent customers are screwed.
Thank. You. So. Much.
I will now end my noble quest of tilting at windmills. I've had literally the exact same experience (minus the revelation you made). I couldn't for the life of me figure out why I was unable to do this anymore. I fairly recently upgraded to two "identical" NVMe drives for macOS and Windows.
This would seem to be the issue, as you suggested. What a _terrible_ engineering decision. Haha. As a software engineer myself, I find this laughable. Maybe use the, oh, I don't know, UUID for disambiguating between two drives? Sheesh.
Thank you for sharing your revelation.
Edit: PSA: Still present in Player 12.2.1 (18811640)
About seven years ago, VMware fired their entire engineering staff for Fusion and Workstation and moved the effort to India. It was an overnight thing, so whatever you think of the business decision to migrate the work, there could not have been a reasonable transition plan.
So... if you think about the quality of Fusion lately, with this bug, Fusion kind of "losing" the VMDK file in the same folder, spotty support for Metal, and a bunch of other problems you've no doubt worked around over the past seven years... do you think this is a coincidence?
It isn't reasonable to expect a brand-new team with no transition plan to untangle problems like this. Having to buy a new hard drive just to make your software work? Sorry, but that shouldn't be the customer's problem.
Well, this is incredibly infuriating and WAY beyond frustrating. These punks have STILL left the problem in place in version 12.2.4. I am in a dire situation where I need to map FOUR (4) identical ZFS volumes into a Linux server on a Mac Pro that still has SEVERAL YEARS left in its life. After spending TWO DAYS (about 4 hours a day) working on this, I finally found this INSANE post!
VMware, please listen: many of us have used you for nearly two decades now. We have sung your praises high and low, even when Parallels was eating your cake. You had the higher-quality product. This particular situation is simply uncalled for. As a systems software engineer who develops low-level operating system functionality myself, I can tell you this is a ridiculous and avoidable oversight.
There are SO MANY ways to resolve this and work around the strange limitation that some nincompoop implemented here!
PLEASE do not let VMware Fusion ride into the sunset on macOS without AT LEAST resolving this imbecilic issue!