jonretting

My Lab VSAN cluster is exceeding my expectations!


I also posted this article on my blog, Lowjax, with some images...

The cluster consists of four hosts contributing to VSAN.

Software:
  • VMware ESXi 6.0.0, 2715440
  • VCSA 6.0
  • VDP 6.0
  • VSAN 6.0
  • VRLI 6.0
  • intel-nvme vib: 1.0e.1.1-1OEM
  • scsi-mpt2sas vib: 20.00.00.00.1vmw-1OEM
  • net-igb vib: 5.2.10-1OEM
  • LSI 2308 P20 Firmware (IT/Pass through)
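
For anyone rebuilding this, here is a minimal Python sketch to confirm the driver VIBs listed above actually landed on each host (ESXi ships with a Python interpreter, so it can run straight in the host shell); esxcli software vib list is the stock command, the rest is just filtering:

```python
import subprocess

# Dump the installed VIB inventory and pick out the driver packages from this build.
out = subprocess.check_output(["esxcli", "software", "vib", "list"]).decode()
for line in out.splitlines():
    if any(name in line for name in ("intel-nvme", "scsi-mpt2sas", "net-igb")):
        print(line)
```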

Core:

  • Supermicro 1U Short Chassis (CSE-512L-260B)
  • Supermicro Xeon UP UATX Server Motherboard (X10SL7-F)
  • Intel(R) Xeon(R) CPU E3-1230 v3 @ 3.30GHz (BX80646E31230V3)
  • (2x) Crucial 16GB Kit for 32GB Memory (CT2KIT102472BD160B)

Power:

  • HDPLEX 160W DC-ATX Power Supply
  • Mini-Box Y-PWR, Hot Swap, Load Sharing Controller
  • (2x) HDPLEX Internal 80W AC Power Adapter (160W peak with failover)

Cooling:

  • Dynatron Copper Heatsink K129 (1U CPU)
  • Enzotech BGA Copper heatsinks (northbridge)
  • SuperMicro Mylar Air Shroud

Storage:

  • Avago (LSI) 2308 SAS2 on-board HBA Controller
  • Samsung 850 Pro 128GB SSD (MZ-7KE128BW)
  • Intel 750 Series 400GB PCIe NVMe 3.0 SSD (SSDPEDMW400G4R5)
  • (3x) Seagate 1TB Enterprise Capacity HDD SAS (ST1000NM0023)
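
Some back-of-the-envelope capacity math for this layout; my assumptions (not spelled out above) are that the Intel 750 is each host's caching tier, the three SAS drives are the capacity tier, the 850 Pro is just the boot device, and objects use the default FTT=1 mirroring policy:

```python
# Rough VSAN capacity math under the assumptions stated above.
hosts = 4
capacity_disks_per_host = 3          # ST1000NM0023 1TB SAS drives
disk_tb = 1.0
cache_gb_per_host = 400              # Intel 750 400GB, assumed caching tier

raw_tb = hosts * capacity_disks_per_host * disk_tb
usable_tb = raw_tb / 2               # FTT=1 mirroring keeps two copies of every object

print(f"raw capacity: {raw_tb:.0f} TB, ~{usable_tb:.0f} TB usable at FTT=1")
print(f"cache-to-capacity ratio per host: "
      f"{cache_gb_per_host / (capacity_disks_per_host * disk_tb * 1000):.0%}")
```

That works out to roughly 12TB raw / ~6TB usable, and about a 13% flash-to-capacity ratio per host, comfortably on the right side of VMware's usual ~10% flash sizing guideline.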

Networking:

  • Intel i210 on-board NIC with 2x 1GbE
  • Intel 10GbE Converged Ethernet Network Adapter (X540T1)

I am still trying to somehow slow the IOPS down with workloads, missing hosts, component resyncs, client IOPS benchmarking, and simulated user events across multiple clients. There seems to be nothing I can do to slow things down at the storage layer. Even ten VDP backups running simultaneously, on top of everything listed above, don't have any effect. My latencies remain between 0.1ms and 1.2ms.
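
For reference, guest-visible latency can be eyeballed with something as simple as the sketch below; it is a hypothetical 4K random-read probe for a Linux guest (test.bin is any large file on a VSAN-backed virtual disk), and without O_DIRECT it can be flattered by the guest page cache, so treat it as a sanity check rather than a proper benchmark:

```python
import os, random, statistics, time

PATH = "test.bin"      # any multi-GB file on the VSAN-backed guest disk (placeholder)
BLOCK = 4096           # 4 KiB, matching the 4k figures below
SAMPLES = 10_000

size = os.path.getsize(PATH)
fd = os.open(PATH, os.O_RDONLY)
latencies_ms = []
for _ in range(SAMPLES):
    offset = random.randrange(0, size - BLOCK) & ~(BLOCK - 1)   # 4K-aligned offset
    t0 = time.perf_counter()
    os.pread(fd, BLOCK, offset)
    latencies_ms.append((time.perf_counter() - t0) * 1000.0)
os.close(fd)

latencies_ms.sort()
print(f"p50 {statistics.median(latencies_ms):.3f} ms   "
      f"p99 {latencies_ms[int(SAMPLES * 0.99)]:.3f} ms")
```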

A Windows client with a two-stripe storage policy reads sequentially at 1000MB/s – 1700MB/s and writes at 600MB/s – 1300MB/s. The same client performs 50k – 60k random 4k write IOPS and 80k – 120k random 4k read IOPS. The benchmark results have never dipped below the stated minimums. Ten other benchmarks running simultaneously generate the same numbers as if nothing else were running, and even during a VDP performance test everything remains the same!
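
Just to put those IOPS figures in throughput terms, here is the simple arithmetic (nothing measured, just a unit conversion at a 4 KiB block size):

```python
BLOCK = 4 * 1024                     # 4 KiB blocks

def to_mb_s(iops):
    return iops * BLOCK / 1_000_000  # decimal MB/s, as most benchmark tools report

for label, low, high in (("4k random write", 50_000, 60_000),
                         ("4k random read", 80_000, 120_000)):
    print(f"{label}: {to_mb_s(low):.0f}-{to_mb_s(high):.0f} MB/s")
```

So the random write range is roughly 200-250 MB/s and the random read range roughly 330-490 MB/s, which sits sensibly below the sequential figures above.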

vMotions complete in just seconds, and on average a host enters VSAN maintenance mode in just under a minute. The capabilities and performance so far are out of this world. If your hardware is near the HCL and you run 10GbE, VMware VSAN truly delivers! Eventually I will put together some comprehensive benchmark numbers for various scenarios.
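
If anyone wants to reproduce the maintenance-mode timing, here is a rough sketch using pyVmomi (VMware's Python SDK); the vCenter address, credentials, and host name are placeholders for this lab, and it uses the default "Ensure accessibility" VSAN decommission mode:

```python
import ssl
import time

from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

# Lab vCenter with a self-signed certificate (placeholder address/credentials).
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcsa.lab.local", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
content = si.RetrieveContent()

# Find the host to drain (placeholder name).
view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.HostSystem], True)
host = next(h for h in view.view if h.name == "esxi01.lab.local")

# "Ensure accessibility" leaves components in place unless an object would
# otherwise lose its last usable replica.
spec = vim.host.MaintenanceSpec(
    vsanMode=vim.vsan.host.DecommissionMode(objectAction="ensureObjectAccessibility"))

start = time.time()
WaitForTask(host.EnterMaintenanceMode_Task(timeout=0,
                                           evacuatePoweredOffVms=False,
                                           maintenanceSpec=spec))
print(f"{host.name} entered maintenance mode in {time.time() - start:.0f}s")

Disconnect(si)
```

Switching objectAction to "evacuateAllData" requests a full data evacuation instead, which naturally takes much longer.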
