DarkSider's Posts

Hi, does anyone know if I can install this patch if I'm running the DellEMC customized 7.0U2 version? Usually I would wait until Dell releases their custom ISO/ZIP, but this issue is really annoying... thanks
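
Edit: for reference, this is roughly how I would apply the patch from the ESXi shell if it turns out to be safe on the Dell image - the depot path and the profile name below are just placeholders, not the actual file names:

    # list the image profiles contained in the downloaded patch depot
    esxcli software sources profile list -d /vmfs/volumes/datastore1/ESXi-7.0U2-patch-depot.zip

    # apply the patch profile; "update" only replaces older VIBs, so newer Dell drivers stay in place
    esxcli software profile update -d /vmfs/volumes/datastore1/ESXi-7.0U2-patch-depot.zip -p ESXi-7.0U2x-standard

The host needs to be in maintenance mode and rebooted afterwards.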
Hi,

due to our storage layout it is advantageous for us to have snapshots created on HDDs rather than SSDs. Snapshots are mainly created during backup hours or upgrades, so the performance impact is acceptable. I also decided to move the swap file of some VMs to a dedicated location. VMware lists the following options for the vmx config file:

    workingDir = "/vmfs/volumes/587939d7-a30f66a9-23a1-ecf4bbe1a868/snapshots/srv"
    snapshot.redoNotWithParent = "true"
    sched.swap.dir = "/vmfs/volumes/587939d7-a30f66a9-23a1-ecf4bbe1a868/swapdir/srv"

The host is running ESXi 6.5 with the latest 20180502001-standard bundle. When I clicked around the VM options in the ESXi web interface I found that neat and handy config editor where you can edit and add vmx options. I had no trouble adding these two lines:

    snapshot.redoNotWithParent = "true"
    sched.swap.dir = "/vmfs/volumes/587939d7-a30f66a9-23a1-ecf4bbe1a868/swapdir/srv"

However, the line

    workingDir = "/vmfs/volumes/587939d7-a30f66a9-23a1-ecf4bbe1a868/snapshots/srv"

won't get saved or persist after applying the change. When I edit the vmx file via SSH and vi I can add the workingDir (and re-register the vmx), but it still won't show up in the ESXi web interface (the option does persist in the saved vmx file, though).

Is there a reason why this specific configuration option has to be set via a manual vmx edit? Editing and re-registering is not too complicated, but opening SSH etc. is just a couple more steps compared to simply punching the parameters into the web UI.

regards,
Fabian
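
Edit: in case it helps anyone, this is roughly the SSH workflow I use for the workingDir option - the vmx path is just an example from my layout, and <vmid> is whatever getallvms reports for the VM:

    # append the option to the vmx file of the VM (VM powered off)
    echo 'workingDir = "/vmfs/volumes/587939d7-a30f66a9-23a1-ecf4bbe1a868/snapshots/srv"' >> /vmfs/volumes/datastore1/srv/srv.vmx

    # find the VM id and reload the configuration so the change is picked up
    vim-cmd vmsvc/getallvms
    vim-cmd vmsvc/reload <vmid>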
Hi,

thanks, I didn't know that the controller also disables the disk cache... you might be referring to storcli /cx/vx set pdcache=<on|off|default>. This sets the disk cache for a virtual disk; it can't be enabled for a single drive. I also created a RAID-1 with two of the HP SAS drives and enabled/disabled the cache (both times with reboots from within the LSI WebBIOS) - absolutely no change in write speeds. Maybe there is another option I haven't found yet.

I never understood that whole power-failure disk-/controller-/whatever-cache issue. Typically you have clients that are connected via Ethernet to a file server. That file server runs as a VM on an ESXi host, which in turn connects to a SAN. The BBU-protected cache of the RAID controller within the SAN unit is fine. However, the file server might accept data via LAN and cache it in its RAM. If the power fails now, the BBU is worth nothing. So I would rather invest in a good UPS and redundant server components.

Regards,
Fabian
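
Edit: for completeness, the storcli commands I used to check and toggle the setting (controller 0, virtual drive 0 in my case):

    # show the current cache settings for the virtual drive
    storcli /c0/v0 show all

    # enable / disable / reset the physical disk cache for that virtual drive
    storcli /c0/v0 set pdcache=on
    storcli /c0/v0 set pdcache=off
    storcli /c0/v0 set pdcache=default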
Hello,

my setup is a Supermicro A1SAM-2750F with an LSI 9240-8i as SAS HBA. My primary goal is to use the 24-bay box as a storage NAS. Since the board is pretty capable with 8 cores and 4 NICs, I might decide to add some other VMs later on.

Currently I have 2x Samsung 840 EVO 250GB SSDs and six 4TB WD Red drives attached directly to the LSI 9240 via SFF-8087, using two rows of the chassis backplanes. I wanted to use the SSDs in a RAID-1 mirror for the ESXi system and VM storage. The 4TB drives get mapped directly via RDM into the OpenMediaVault VM to build a Linux MD RAID-6. The card has the latest firmware and I installed the latest VMware LSI driver package, storcli etc.

I set it all up and basically everything works. The 4TB WD drives perform great, with write speeds >100MB/s over Gbit Ethernet. However, the SSD RAID-1 doesn't perform well at all: I only get write speeds of 10-15MB/s. This is true both for write tests using the ESXi 5.5 SSH shell and for writing into the VMDK of my file server via the guest OS. I also tested the same setup with two HP 2.5" 10k SAS drives - same speed at 10-15MB/s. I did this to rule out a compatibility issue with the SSDs, although they are listed as compatible for my LSI card.

Of course I did some researching and googling. As it turns out, the 9240 does not have any cache memory, which makes the device really slow in RAID-5 configurations. Apparently this is also true for RAID-1. So I went a step further: I removed the RAID-1 configuration and set the disk up as a single-drive JBOD. The speed did not improve. So I deactivated the controller's BIOS, which meant I could no longer boot from the 9240 card, and connected the SSD directly to a SATA3 port of the Intel chipset on the mainboard. After booting the box I got really good speeds on the SSD, about 250MB/s from within the guest OS. The HP SAS drive, which is still connected to the 9240 with its BIOS disabled, still only delivers 10-15MB/s.

A very popular explanation is that ESXi does no host-side caching and fully relies on the controller's write-back capabilities. Since my card has no cache it has no write-back, and thus the performance is bad. However, the onboard SATA3 controller should suffer from the very same issue! I can't imagine that an Intel AHCI controller has built-in caching...

I might just order a 9261-8i, which does native RAID-6 and has a cache, but I'm actually more interested in solving this strange issue... Any ideas?

regards,
Fabian
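
Edit: for anyone who wants to reproduce the tests, this is roughly what I ran - the datastore name and the naa. id are of course specific to my box:

    # crude sequential write test on the SSD RAID-1 datastore from the ESXi shell
    time dd if=/dev/zero of=/vmfs/volumes/ssd-raid1/ddtest.bin bs=1M count=1024
    rm /vmfs/volumes/ssd-raid1/ddtest.bin

    # physical RDM pointer file for one of the 4TB WD Reds, later attached to the OMV VM
    vmkfstools -z /vmfs/devices/disks/naa.<wd-red-id> /vmfs/volumes/ssd-raid1/omv/wd-red-rdm.vmdk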
Hello,

no, the issue was not about the monolithic vmdk on the ESXi. That's OK with me because I have plenty of disk space on my ESXi box (but it's good to know that there are ways to create "growing" vmdks).

What I originally observed was: when I import an appliance that has a logical size of 40GB, but the converter made it a nice and compact 1GB ovf/vmdk, and I transfer it to the ESXi via the VI Client, the client shows the estimated time for copying 40GB at the current bandwidth. In fact the VI Client only transmits the 1GB and is done after 45 minutes or so, although it initially stated that the import task would take around 600 minutes (40GB at 10Mbit/s)...

bye,
Darky
Hello,

I've got an OVF appliance. The .vmdk in the OVF folder is about 1GB; the actual virtual disk size is 40GB - since most of it is unused, the .vmdk is quite small.

When I try to import the appliance to my ESX 3i server, everything seems to work normally. The import wizard tells me that the target disk size is 40GB and that it has to transfer about 980MB. But when the transfer starts it just goes on forever. The network connection is maxed out at 10-15Mbit/s and the import progress window shows about 500 minutes left. (That's about the time you'd need to transfer 40GB at 10-15Mbit/s.)

Since I have to transfer another machine (vmdk size: 15GB, vdisk size 250GB(!)), I don't want to upload the whole 250GB to my VM server...

Is this behaviour normal or did I make a mistake?

regards,
Darky
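
Edit: to show what I mean with the sizes, this is what I see on the machine where the exported appliance lives (file names are just examples from my side):

    # file size on disk of the packaged disk - the ~1GB that should actually be uploaded
    ls -lh appliance-disk1.vmdk

    # declared capacity of the virtual disk inside the OVF descriptor - the 40GB
    grep -i capacity appliance.ovf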