First ever post, just trying to report a bug as directed by VMware (Defect Report | United States).
Ran across an interesting bug in my home lab. I have a single host currently running ESXi 5.1, with SSDs and 3 TB spinning disks. Because I want to be flexible in my use of the 3 TB drives, using them both as NAS and VM storage, I assign them to a storage appliance VM which carves them up using ZFS. I have had no problems with my current setup, except that the appliance I currently use has a well-known bug that causes high idle CPU usage if given more than one vCPU. So I have been testing other ZFS alternatives, and in doing so I ran across an interesting bug.
BUG: If you assign an RDM to a VM running Solaris 11.1 and boot it, ESXi will purple screen and bring everything down within 2 or 3 minutes. The only solution is to hard power off and power back on. It seems to me that no matter what, a VM should not be able to bring down the hypervisor, so I thought I would report it.
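For anyone trying to reproduce this, here is roughly how an RDM can be mapped from the ESXi shell; the device identifier and datastore path below are placeholders, not my exact setup, and -z creates the mapping in physical compatibility mode (-r would be virtual compatibility):

    # find the raw device identifier of the 3 TB disk
    ls /vmfs/devices/disks/

    # create the RDM pointer file on a VMFS datastore (paths are examples only),
    # then attach the resulting .vmdk to the Solaris 11.1 VM as an existing disk
    vmkfstools -z /vmfs/devices/disks/t10.ATA_____ST3000XXXXXXXX /vmfs/volumes/datastore1/solaris11/rdm-3tb.vmdk

Booting the Solaris 11.1 VM with that disk attached is what triggers the purple screen on my host.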
As supporting documentation, I have found a blog post (VMware vSphere ESXi 5.1 and Solaris 11.1 with RDM; do not do this) and a forum thread (third post down, post) from others who have experienced this issue.
Hardware (different from what the others are running, so I think this is an ESXi issue): AMD FX-8120 8-core CPU, 32 GB RAM, 3 TB Seagate SATA 6 Gb/s drives with 1 TB per platter (model ST3000...), ASUS motherboard.
Feel free to comment; hopefully this gets to someone from VMware who can check it out.