VMware Cloud Community
TAPufd
Contributor

disks spinning up and down

I recently bought myself a simple USB 3.1 gen 2 external disk enclosure (4 bays; no RAID; JBOD) to connect to my ESXi 7.0 host and pass the disks through to a VM.

The problem I experience is that when the host is running and I connect the enclosure, the disks constantly spin up and down... I suspect something caused by power saving. If I run lsusb I see the disks, but it looks like they are rescanned all the time, as the device count goes up and down between lsusb runs.
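One way to make that churn visible is to poll the lsusb output and log the device count over time; a minimal sketch, assuming you run it from the ESXi shell (poll_usb is a hypothetical helper, not an ESXi command):

```shell
# poll_usb COUNT DELAY: print a timestamped USB device count COUNT times,
# DELAY seconds apart. A count that keeps rising and falling points at
# the enclosure being re-enumerated, not at the disks themselves.
poll_usb() {
  i=0
  while [ "$i" -lt "$1" ]; do
    printf '%s devices=%s\n' "$(date '+%H:%M:%S')" "$(lsusb 2>/dev/null | wc -l)"
    i=$((i + 1))
    [ "$i" -lt "$1" ] && sleep "$2"
  done
}

# Example: sample every 2 seconds while plugging the enclosure in
poll_usb 5 2
```

If the count is stable after a reboot but oscillates on hot-plug, that matches the behaviour described above.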

The strange thing is that when I reboot my ESXi host while the disks are spinning up and down, the disks become stable (stay up) during the reboot process.

Once ESXi has started up, the disks keep spinning and are fully operational. After the reboot the lsusb results are also stable.

It is only after I shut down and start the enclosure again that the problem starts again.

I have the issue on all my USB ports (3.1 gen 1 & gen 2).

When connecting the enclosure to a Windows or Linux host, everything works perfectly.

Anyone have experience with this?

/J

4 Replies
daphnissov
Immortal

I would suspect it's smartd running checks against those disks. You could test by disabling it. But, for what it's worth, let me just state that what you're doing is not only unsupported but, based on experience, not a very good idea. It's stuff like that which causes people to come running back saying their VMs are blown up and asking for help with data recovery. Caveat emptor.

TAPufd
Contributor

First of all, thanks for your answer!

My budget is limited, so that is why I went this way.

I tried the command "/etc/init.d/smartd stop" and then started the disk enclosure, but the disks still spin up and down.
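To rule out the daemon not actually having stopped, a quick hedged check is to look for it in the process list (is_running is a hypothetical helper; ESXi's ps output differs from Linux, but a plain ps | grep is enough for this):

```shell
# is_running NAME: true if a process matching NAME shows up in ps.
# (grep -v grep drops the grep invocation itself from the match.)
is_running() { ps | grep -v grep | grep -q "$1"; }

# After /etc/init.d/smartd stop, this should report "smartd stopped":
if is_running smartd; then
  echo "smartd still running"
else
  echo "smartd stopped"
fi
```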

Isn't passthrough of USB devices officially supported? I specifically went that way because it is available via the GUI, so I assumed it was supported.

My first idea was to buy a small NAS (Synology, QNAP, ... ) but:

  • I don't like those OSes (bugs, crappy GUIs, missing features, limited lifetime support, ...)
  • You can't snapshot/backup them in a good way
  • You are stuck with their OS, and can't switch to something else.
So I decided to buy an external enclosure that just houses the disks (4 bays) and presents them as separate disks (JBOD) over a fast interface supported by VMware (USB 3.1 gen 2 since ESXi 7.0), and then pass those disks through to a VM. This way I can choose which OS controls those disks, and I keep the ability to back up/snapshot that OS.

/T

daphnissov
Immortal

Pass-through of *things* is generally supported, but every time someone does it it's suspect, and most times it should be avoided. ESXi doesn't do USB storage well, and virtual machines were not designed to be hosted across such an interface. Couple that with the fact that, by doing JBOD, you have no disk-level redundancy and only a single spindle for performance, and you're really rolling the dice. What level of risk you're willing to accept is up to you, but an external NAS like a Synology is a far better choice. They're so frequently deployed in this home lab community (I myself run two different ones) and have a good enough feature set that they're the preferred way to go.

TAPufd
Contributor

I understand your point of view, but the whole idea is to have more flexibility... not being manufacturer- or OS-dependent.

In my case I would attach 2 of the 4 JBOD disks via the USB passthrough and use Windows Storage Pools to get redundancy.

I have already done some tests (when the disks run stably), and it works like a charm. There is also the benefit that I can attach my Storage Pools to any Windows machine, import them, and I'm ready to go.

I agree that redundancy is definitely important, but for me speed/IOPS is less important.

In your first post you mentioned smartd... are there any other processes we could test with?
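The thread leaves this unanswered, but two common suspects besides smartd are the USB arbitrator service (which hands USB devices to VMs, so `/etc/init.d/usbarbitrator stop` is only a brief test, passthrough needs it running again afterwards) and the kernel's own device rescans, which show up in the vmkernel log. A minimal sketch for the latter, assuming the standard ESXi log location (usb_events is a hypothetical helper):

```shell
# usb_events LOGFILE: show the last 20 USB-related lines from a vmkernel
# log; repeated attach/detach messages there would confirm a
# re-enumeration loop rather than a problem with the disks.
usb_events() {
  grep -i 'usb' "$1" 2>/dev/null | tail -n 20
}

# On the host:
usb_events /var/log/vmkernel.log
```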

Br T
