We are using ESX 3.5 Update 1 and have an EMC CDL-300 that we use to assign VTLs to physical machines. We are moving all these physical machines to the ESX box and wish to assign the VTLs to the resulting VMs. The biggest issue we are facing is that the ESX host itself fails to detect the VTLs (so obviously we can't assign any to the VMs). Has anyone successfully used a CDL-300 with ESX 3.5 Update 1? If yes, please assist. We don't have any kind of zoning; the default is all/open access. We are able to assign LUNs from an HDS (Hitachi) array and the ESX host sees them just fine, but we are unable to get the ESX host to detect the VTLs. Any help will be much appreciated.
I was told that VMware officially does not support tape libraries (virtual or physical) over Fibre Channel. Is this true?
While trying to fix the issue, I found that the VI Client shows the HBAs as QLA2432 whereas they are 2462s. I even removed one of the HBAs and put in a 2460 (and I am sure this one is a 2460), but I see no sign of a solution; still no tape libraries being detected. The QLogic BIOS at startup clearly shows that the VTL has been recognised at an ID/LUN, but I still can't find it under Configuration -> Storage Adapters. Has no one else faced this issue?
Regarding the model of the QLogic adapters, I would not worry: if they are listed as vmhbas, the VMkernel has successfully loaded a driver for them. The model reported is probably the chipset, not necessarily the actual vendor model number, as the same chips are used in many adapters. A dual-port HBA will show up as two single-port HBAs, for instance.
Your main problem is probably that the VMkernel only understands "disk"-type devices, and will not touch a VTL or other non-disk SCSI device.
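To make the distinction concrete (this is standard SCSI background, not anything ESX-specific): a LUN's type comes from the peripheral device type field in its SCSI INQUIRY response, and a VTL emulates tape drives and a medium changer rather than disks. A minimal illustrative sketch of the standard codes:

```python
# Illustrative only: standard SCSI peripheral device type codes.
# A disk array presents 0x00 LUNs; a VTL presents 0x01 and 0x08 LUNs,
# which is what a storage stack limited to disks would skip over.
SCSI_DEVICE_TYPES = {
    0x00: "Direct-access block device (disk)",
    0x01: "Sequential-access device (tape drive)",
    0x05: "CD/DVD device",
    0x08: "Medium changer (library robotics)",
}

def describe_device_type(code):
    """Return a human-readable name for a SCSI peripheral device type code."""
    return SCSI_DEVICE_TYPES.get(code, "Other/unknown")

print(describe_device_type(0x00))  # what the HDS array presents
print(describe_device_type(0x08))  # what the CDL-300's changer presents
```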
However, although you may have good reasons for initiating this project, a backup server is really not an ideal candidate for virtualization. Typically, these systems are the most disk- and network-intensive applications of all, and tend to be the worst-case workload for virtualization regardless of hypervisor platform.
If your reasons for virtualizing your backup system are very compelling, you may be able to achieve what you want by using NPIV and assigning a virtual WWN to your virtual backup hosts. This is not something I've ever tried personally, but you may be able to present your VTLs directly to your virtual machines using NPIV.
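Again, I haven't set this up myself, so treat this as a sketch: on ESX 3.5, NPIV is configured per-VM through WWN entries in the virtual machine's .vmx file (the VI Client can generate these under the VM's Fibre Channel NPIV settings). The values below are placeholders, not real WWNs:

```
wwn.type = "vc"
wwn.node = "xx:xx:xx:xx:xx:xx:xx:xx"
wwn.port = "xx:xx:xx:xx:xx:xx:xx:xx"
```

Whether the VMkernel will then actually pass a non-disk LUN through to the guest is exactly the open question in this thread, so test before relying on it.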
Lastly, I may have completely misunderstood what you're trying to achieve, in which case I apologize.
Thanks oschistad, your response is appreciated. The real reason for doing this is that we run training for a few backup products, so the VMs created won't exactly be for production. Here's the double whammy: I rolled back (well, not really rolled back, but simply reinstalled) to 3.0.1 and I can see everything as expected. I assigned 10 VTLs to the ESX HBA and I see 10 medium changers and their respective drives. What has changed in 3.5 that causes such a major difference in the way the VMkernel sees non-disk SCSI devices? I am also interested to find out whether this behaviour of 3.5 is specific to EMC VTLs or whether it does the same for others.
Hmm, now that is interesting indeed. I do know that VMware did a major redesign of their storage stack going to 3.5, and as a consequence all storage had to be recertified.
An interesting experiment to perform, if you have the time and inclination, might be to test again using a 3.5 server and check what messages get printed in the vmkernel log while performing a rescan after you present the VTLs back to the ESX host. You should at least see it take a peek at each VTL LUN, and probably something along the lines of "Could not get disk id for vmhba1:1:N".
Won't solve anything of course, but it might provide an interesting pointer towards what is going on.
I have the same problem with a Dell PowerVault 132T on Fibre Channel.