VMware Cloud Community
Tarucan
Contributor

Datastore speed related to?

Hi all, I'm wondering whether a VM's disk performance is tied to the connection speed between the ESXi 7 host and the datastore the VM lives on. Let me explain my setup, and I hope someone can answer.

My case: 

ESXi 7 installed on an internal SATA3 SSD @ 6 Gbps

datastore#1 on the same internal SATA3 SSD as the ESXi 7 installation

datastore#2 on a USB 3.0-connected external NAS with an SSD inside

datastore#3 on a NAS connected over the 2.5 Gbps LAN via NFS 4.

-------------------------------------------

TEST Results:

434 MB/s disk speed test (Linux command line, dd) on VM1 hosted on the internal SSD

115 MB/s on VM2 hosted on the USB 3.0 NAS (SATA3 6 Gbps HDD)

78 MB/s on VM3 hosted on the 2.5 Gbps LAN NAS (SSD)

As you can see, VM3 is much slower even though it uses an SSD, compared to the HDD over USB 3.0.
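One thing worth checking: dd without direct I/O mostly measures the guest's page cache, not the datastore underneath. A minimal sketch of a more representative test (the file path and sizes are just examples, not from the thread):

```shell
TESTFILE=./ddtest.bin   # example path; put it on a disk backed by the datastore under test

# Write test: oflag=direct bypasses the guest page cache, and conv=fsync
# forces the data to disk before dd reports a speed
dd if=/dev/zero of="$TESTFILE" bs=1M count=64 oflag=direct conv=fsync

# Read test: iflag=direct reads from the device instead of cached pages
dd if="$TESTFILE" of=/dev/null bs=1M iflag=direct

# remove $TESTFILE when done
```

If the direct-I/O numbers come out far below the original ones, the earlier results were largely measuring RAM rather than the datastore.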

My doubt is: do ESXi 7 VMs run entirely in host RAM, falling back to their own disks only when RAM runs out?

Is it all handled like a cache held in the host's RAM?

I ask because I must decide on the best configuration for a micro-datacenter.

1 Solution

Accepted Solutions
degvm
Enthusiast

degvm_0-1682103058104.png

No high peaks here, just normal operation. During the backup at midnight it rises. But note: 2,000 MB/s at only 30,000 IOPS because of NFS, with latency below 1 ms. So... not bad.
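Those two figures are consistent with each other: throughput divided by IOPS gives the average request size, so large sequential I/O can show high MB/s at modest IOPS. A quick sanity check in shell:

```shell
# average request size = throughput / IOPS
# 2,000 MB/s at 30,000 IOPS, expressed in KiB:
echo $(( 2000 * 1024 / 30000 ))   # → 68 (roughly 68 KiB per I/O)
```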

 


5 Replies
degvm
Enthusiast

Hi, we are using network-attached storage mounted over NFS 3 on 10/25 Gbit network cards. The performance is about 2,000 MB/s. It depends on the storage system being able to provide 50,000 IOPS with 12-14 SSDs.

We are running a small 2-node VMware cluster with a NetApp, as a small-site configuration.

NFS needs some tweaking: change some advanced parameters to fit the NetApp storage. Perhaps you should also invest in tuning. Our "normal" speed with NFS is about 400 MB/s, but we have also seen peaks of 5,000+ MB/s out of the storage, though that was on a 12-node VMware cluster with 25/40 Gbit NICs.
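For reference, the advanced NFS parameters mentioned above can be viewed and changed from the ESXi shell. The parameter and value below are only an illustration, not from this thread; follow your array vendor's (e.g. NetApp's) recommended values:

```shell
# Show the current value of an NFS advanced setting on the ESXi host
esxcli system settings advanced list -o /NFS/MaxQueueDepth

# Example change (value is illustrative, not a recommendation):
esxcli system settings advanced set -o /NFS/MaxQueueDepth -i 64
```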

Tarucan
Contributor

Hard to believe these numbers.

Tarucan
Contributor

In the end, after many tests, we abandoned the LAN NAS in favour of a PCIe 3.0 x8 hardware RAID controller,

which delivers impressive speed, reaching the SSDs' limits, and even more with cache.

This is the only serious option for production, where speed and low latency are a must.


degvm
Enthusiast

degvm_0-1682104520768.png

Close to 5,000 MB/s.