SteveEsx's Posts

I see the same on our Dell PowerEdge R740XD units, but the new Dell PowerEdge T440 that we got last summer says directly that the BOSS card is the problem. I have contacted Dell to get their feedback on the issue and they are looking into it.
Our users typically deploy multi-machine blueprints that might, for example, contain 6 virtual machines. In the vRA GUI it is very time consuming to run a VM action like Power On or Power Off on such multi-machine deployments, because you have to perform the action on each individual VM; there is no "Power On All VMs" action on the parent object (the parent object only has the actions change lease/owner/security, destroy, expire, scale in/out and view details). Is there a way to get action choices on the parent object that apply to all VMs in that multi-machine deployment? The things our users typically want to do at the same time on all VMs in a multi-machine deployment are: Create Snapshot, Power On and Power Off. If there is no built-in functionality for that, is it possible to code it in vRO as a custom action on the parent object? Maybe someone else has already found an efficient solution to this? Thanks for any tips.
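In case it helps the discussion, here is a rough, untested sketch of one direction I am considering as an alternative to a vRO custom action: driving the per-VM day-2 actions from outside vRA through the catalog consumer REST API. This is only a sketch based on my reading of the vRA 7.x API docs; the endpoint paths, field names (for example parentResourceRef and matching actions by name) and the hostname/credentials are assumptions, so please treat it as pseudocode to adapt rather than a verified solution.

```python
# Hypothetical sketch: request a day-2 action (e.g. "Power On") on every child VM
# of a vRA deployment via the catalog consumer REST API. Endpoints and field names
# are assumptions taken from the vRA 7.x API documentation, not tested code.
import requests

VRA = "https://vra.example.com"  # hypothetical vRA appliance FQDN
AUTH = {"username": "user@domain", "password": "secret", "tenant": "vsphere.local"}


def get_token(session: requests.Session) -> str:
    # Request a bearer token from the identity service.
    r = session.post(f"{VRA}/identity/api/tokens", json=AUTH, verify=False)
    r.raise_for_status()
    return r.json()["id"]


def run_action_on_children(deployment_id: str, action_name: str = "Power On") -> None:
    s = requests.Session()
    s.headers["Authorization"] = f"Bearer {get_token(s)}"
    s.headers["Accept"] = "application/json"

    # List consumer resources and keep the VMs whose parent is our deployment.
    resources = s.get(f"{VRA}/catalog-service/api/consumer/resources",
                      params={"limit": 100}, verify=False).json()["content"]
    children = [r for r in resources
                if (r.get("parentResourceRef") or {}).get("id") == deployment_id]

    for vm in children:
        # Find the matching day-2 action exposed on this resource.
        actions = s.get(f"{VRA}/catalog-service/api/consumer/resources/{vm['id']}/actions",
                        verify=False).json()["content"]
        action = next((a for a in actions if a["name"] == action_name), None)
        if action is None:
            print(f"{vm['name']}: action '{action_name}' not available, skipping")
            continue
        base = (f"{VRA}/catalog-service/api/consumer/resources/"
                f"{vm['id']}/actions/{action['id']}/requests")
        template = s.get(f"{base}/template", verify=False).json()  # request template
        s.post(base, json=template, verify=False).raise_for_status()
        print(f"{vm['name']}: '{action_name}' requested")
```

The same loop could presumably live inside a vRO custom action bound to the parent object instead, so it would show up next to the existing parent actions; the REST version is just easier for me to test from the outside first.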
I've now completed the upgrade to vRA 7.6 and verified that this works fine now. Thanks for the help.
Thanks, I did not know that. I was planning to upgrade to vRA 7.5 this summer anyway. If I upgrade, can I then change already deployed VMs, or will it only apply to new deployments processed by vRA 7.4+? Also, what is the procedure for making the limit change on deployed VMs in the new version? I'm reading the docs now, but it would be great if you could point me in the right direction.
I got a request to change our leases in vRA to 365 days, renewable. That was easy enough to change on our blueprints: min lease 1 day, max lease 365 days, archival 90 days. But how do I change it on already deployed machines in vRA 7.3? They still follow the limits that were set when they were deployed and can only be extended by 90 days. I googled around and found the post below, but if I understand it correctly it does not change the limits, it only does a lease extension within the limits already set: Update Lease date for a deployed VM from VRO. Can I change the limits on deployed machines with a vRO workflow, or do I need to edit the vRA database? It does not look like PowerVRA has any commands for this, but maybe the CloudClient does? Thanks for any help with this.
The problem seems to be that the console DNS settings are not correctly set during OVF/OVA deployment (looks like a bug). Even though the DNS settings are correct on the web management page, the solution is to change the DNS settings there and restart the VSM VA when prompted.
I found what seems to me to be a strange bug when deploying the Dell EqualLogic Virtual Storage Manager (EQL VSM). When choosing to deploy the OVA file you specify which DNS servers the VA should use. When the deployment is finished and the VA starts, it is unable to time sync to NTP servers using DNS names; using IP addresses works. Looking more closely at the VA I find that:
1. The DNS servers are used correctly on the web page of the VA (in Configure VSM Properties).
2. A different set of DNS servers is used on the VA console (log in on the VA console with the default login of root / eql).
The DNS servers the VA chooses to use in the console start with 10.127.x.x, which is something we have never used anywhere or told the VA to use during deployment. I have had Dell EQL support look at this, but the only advice so far has been to redeploy the VSM, which I have now done 5 times with the same results. Since the console only has an auto-start menu for the root login, I'm unable to exit that menu and edit the Linux DNS settings directly in the console, which is a bit annoying; maybe there is a workaround for that? Has anyone else seen this problem? VSM version = VSM-3.5.2.1.ova
That's interesting. I have 5 x Dell PowerEdge R710 servers where I use the 4 embedded 1 Gb ports plus an additional Intel 4 x 1 Gb port card plus 2 x 10 Gb dual port cards (but only 1 of the 4 10 Gb ports in active use). These servers have been stable for the last 6+ months with ESXi 4.1. The only reason I am using so many ports is that I have not had time to make the new VLAN network design yet, where I will only use the 10 Gb ports + 1 Gb management DRAC port. Maybe I should remove the redundant Intel dual port 10 Gb cards then... Or has this been solved in VMware ESXi 5, or is it a motherboard design issue?
Hi Lars, Yes, I agree with all you are saying, but I think this is the most efficient way to have a simple Iometer reference test (a "standard" test that is easy to teach others to do). I would follow your suggestions if my goal were a performance competition between SAN products, but my goal is simply to spot systems that have major issues with either random or sequential I/O performance at other sites. My company frequently installs heavy database solutions on old SANs that often perform very badly, which is a waste of time and money, so it is nice to have some numbers that show what is wrong in an efficient way. I guess one option would be to have a virtual machine and add extra VMDK disks on other LUNs to test them; that would also be efficient. But what do most people do when they post results here, I wonder? I did my first tests here on a server with no other VMs, on a LUN with no other VMs; my goal was to get the best result in a non-busy environment so I can see the effects later when the environment is busy. I think the test file is large enough in this test: the VM has 8 GB of RAM and the PERC controller and MD3000i controllers do not have more than 1 GB of cache. However, I'm rather new at using Iometer; before this I used simpler tools like HD Tune and HD Tach. Those tools are not as good at showing random I/O performance as I think Iometer is, and it is very nice to have some reference numbers and comments from this community. It might also have an effect on my own storage purchases in the future, I guess. Do you think 22k IOPS is impossible with 6 drives, like Gabriel says here? For comparison I did a test on a physical PE2900 host with 8 drives and it got these numbers:

W2k8 R2 Enterprise, Raid 10, 8 x Seagate 300 GB SAS 15k, Perc 6/i, SEP11 AV installed
SERVER TYPE: Dell PowerEdge 2900 III
CPU TYPE / NUMBER:
HOST TYPE: Dell PowerEdge 2900 III
STORAGE TYPE / DISK NUMBER / RAID LEVEL: R10 8xSAS 15K Seagate 3.5" Perc 6/i
|*Test name*|*Latency*|*Avg iops*|*Avg MBps*|*cpu load*|
|Max Throughput-100%Read|2.98|18170|567|2%|
|RealLife-60%Rand-65%Read|15.79|3079|24|0%|
|Max Throughput-50%Read|3.06|19046|595|3%|
|Random-8k-70%Read|17.55|2813|21|0%|

However, on this host the test file might have been too small, I guess, as the server has 32 GB of RAM and W2k8 R2 does a lot of strange I/O caching. Just for fun I also tested a PE2900 server with mainstream Intel 80 GB G2 SSD drives, and then it had awesome performance in the random I/O, as expected:

W2k8 R2 Enterprise, Raid 5, 8 x Intel SSD 80 GB gen2, Perc 6/i, no AV
SERVER TYPE: Dell PowerEdge 2900 III
CPU TYPE / NUMBER:
HOST TYPE: Dell PowerEdge 2900 III
STORAGE TYPE / DISK NUMBER / RAID LEVEL: R5 8xIntel 80gb SSD gen2 Perc 6/i
|*Test name*|*Latency*|*Avg iops*|*Avg MBps*|*cpu load*|
|Max Throughput-100%Read|3.04|19950|623|2%|
|RealLife-60%Rand-65%Read|4.59|12480|97|1%|
|Max Throughput-50%Read|3.12|18678|583|3%|
|Random-8k-70%Read|5.21|10396|81|0%|

I know it's impossible to make the perfect test, but at least I hope to avoid some really bad mistakes that would make the numbers meaningless. And I'm hoping these numbers can be compared to others in this forum. I didn't know about the results only showing one CPU; that's interesting. Thanks for your input, S
Hi Gabriel, I don't use DAS anywhere; I have tested local server disks and SAN boxes. I guess with DAS you are referring to the internal RAID with 6 disks on the R710 server? Is 22k IOPS not normal for an internal RAID 10 with 6 spindles? I don't have enough experience with this test to know that, and I would love to know if I have done something wrong in the test. I have followed what I think is the normal procedure here: created virtual machines with different versions of Windows Server, installed Iometer 2006 and run the test. The numbers are from the logfiles parsed by the web page http://vmktree.org/iometer/. I am not posting here to have the "best numbers"; I want to find what normal performance is with a test that is repeated by other people, so I can use that test as a reference for finding I/O issues at other sites. This is the procedure I followed:
1. Created 3 virtual machines, "iometer01", "iometer02" and "iometer03" (using 3 different virtual SCSI types).
2. Installed Windows Server and patched them with Windows Update.
3. Installed iometer-2006.07.27.win32.i386-setup.exe (I'm not sure if this is the version everyone is using or not?).
4. Opened the configuration file that contains the tests and added them to "assigned access specifications".
5. Clicked the green flag to start the test and created the log file.
6. Parsed the logfile with http://vmktree.org/iometer/.
If there are other tests you would like me to do to verify the numbers, I'm happy to do so. Why some tests say 0% CPU load I have no idea; I'm not an Iometer expert. I have attached the logfile from one test, "iometer02-san01.csv", which is the test that seems to have the lowest CPU load, so maybe you can see if something is not right there? Thanks for your input; it is interesting to know if the test numbers are sane.
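For context on the 22k IOPS question, here is a rough back-of-the-envelope sanity check I put together; the per-spindle figure is an assumption (a 15k SAS drive is usually quoted at roughly 175-200 random IOPS), so the point is the order of magnitude, not an exact prediction.

```python
# Rough spindle-only IOPS estimate for a RAID 10 set; the per-disk figure
# (~180 random IOPS for a 15k SAS drive) is an assumption, not a measurement.

def raid10_random_iops(disks: int, per_disk_iops: float = 180.0,
                       read_fraction: float = 0.65) -> float:
    """Estimate random IOPS for RAID 10 with a simple write-penalty-of-2 model."""
    raw = disks * per_disk_iops                      # total back-end IOPS available
    write_fraction = 1.0 - read_fraction
    # Each front-end write costs 2 back-end I/Os in RAID 10 (mirrored write).
    return raw / (read_fraction + 2 * write_fraction)

if __name__ == "__main__":
    # 6 x 15k SAS spindles: 100% random read vs. the 65% read "RealLife" mix
    print(f"6 disks, 100% random read : ~{raid10_random_iops(6, read_fraction=1.0):.0f} IOPS")
    print(f"6 disks, 65% read mix     : ~{raid10_random_iops(6):.0f} IOPS")
```

On that model 6 spindles land somewhere around 800-1,100 random IOPS, so the ~2,700-3,000 IOPS I measured on the random tests already includes a good amount of help from the 1 GB controller cache, while the 22k figure comes from the sequential "Max Throughput" test, where cached sequential reads are expected to be far above raw spindle speed.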
Table over results (MaxTp = Max Throughput-100% Read test, RL = RealLife-60%Rand-65% Read test; latency in ms, cpu load in %):

|*Storage*|*Raid*|*Phys host*|*vscsi type*|*vmdk*|*MaxTp Latency*|*MaxTp Avg iops*|*MaxTp Avg MBps*|*MaxTp cpu load*|*RL Latency*|*RL Avg iops*|*RL Avg MBps*|*RL cpu load*|
|Local disks|Perc H700 with 6 x 600gb SAS 15k 3.5" - Raid 10|Dell PowerEdge R710|LSI Logic SAS|40 gb thick|2.66|22429|700|88|16.18|2957|23|80|
|Local disks|Perc H700 with 6 x 600gb SAS 15k 3.5" - Raid 10|Dell PowerEdge R710|LSI Parallel|40 gb thick|3.00|19621|613|0|16.04|3018|23|27|
|Local disks|Perc H700 with 6 x 600gb SAS 15k 3.5" - Raid 10|Dell PowerEdge R710|Vmware Paravirtual|40 gb thick|2.15|27769|867|32|15.90|3011|23|7|
|iSCSI SAN|Md3000i 4 disk dg/vd - Raid 5|Dell PowerEdge R710|LSI Logic SAS|40 gb thick|15.38|3907|122|16|36.71|1108|8|45|
|iSCSI SAN|Md3000i 4 disk dg/vd - Raid 5|Dell PowerEdge R710|LSI Parallel|40 gb thick|15.32|3904|122|0|35.71|1131|8|25|
|iSCSI SAN|Md3000i 4 disk dg/vd - Raid 5|Dell PowerEdge R710|Vmware Paravirtual|40 gb thick|15.14|3967|123|1|35.07|1119|8|18|
|iSCSI SAN|Md3000i 2 disk dg/vd - Raid 1|Dell PowerEdge R710|LSI Logic SAS|40 gb thick|15.17|3958|123|17|52.25|902|7|34|
|iSCSI SAN|Md3000i 14 disk dg/vd - Raid 10|Dell PowerEdge R710|LSI Logic SAS|40 gb thick|17.14|3520|110|16|15.45|3696|28|18|
|iSCSI SAN|Md3000i 14 disk dg/vd - Raid 5|Dell PowerEdge R710|LSI Logic SAS|40 gb thick|17.06|3535|110|16|19.49|2542|19|29|
|Local SSD|no raid - ESB2 intel - Crucial RealSSD C300 2.5" 128gb|Dell Precision T5400|n/a|n/a|7.15|8243|257|11|6.68|8629|67|9|
|Local SSD|no raid - ICH9 intel - Intel 80gb G2 M|Dell Latitude E6400|n/a|n/a|9.31|6402|200|35|16.26|3305|25|56|
|Local disks|Perc 5/i - 4 disks 300gb sas 15k raid 5|Dell PowerEdge 2950|n/a|n/a|3.64|17175|536|5|37.42|1197|9|3|

Hosts used in test:

|*Host*|*Model*|*Cpu*|*Memory*|*I/O controller*|*Local disk(s)*|*OS*|*NIC*|
|vSphere server|Dell PowerEdge R710|2 x Intel Xeon X5680 3.33 GHz 6 core, 12M cache, 6.40 GT/s QPI, 130W TDP, Turbo, HT|96 GB for 2 cpu (12 x 8 GB Dual Rank RDIMMs) 1333 MHz|Perc H700 Integrated, 1 GB NV Cache, x6 backplane|1 x SDcard + 6 x 600 GB SAS 6 Gbps 15K 3.5", raid 10|VMware ESXi 4.1.0 build 348381 on SDcard|Embedded Broadcom GbE LOM with TOE and iSCSI offload (4 port) & Intel Gigabit ET quad port server adapter PCIe x4 & Intel X520-DA 10GbE dual port PCIe x8|
|Workstation|Dell Precision T5400|1 x Xeon E5440 2.83 GHz quad core|16 GB fully buffered DIMM|Intel 5400 chipset (Intel ESB2 SATA raid controller)|1 x Crucial RealSSD C300 2.5" 128GB SATA 6 Gb/s|Windows 7 Enterprise x64|Broadcom 57xx & Intel Pro 1000 PT dual SA|
|Laptop|Dell Latitude E6400|1 x 2.53 GHz Intel Core 2 Duo|4 GB|Intel ICH9|1 x Intel 80gb SSD gen2 M|Windows 7 Enterprise x64|Intel 82567|
|Physical server|Dell PowerEdge 2950|2 x Intel Xeon 5150 2.66 GHz dual core, 4MB L2 cache|16 GB 533 MHz|Perc 5/i|4 x 300 GB 15k SAS, raid 5|Windows 2008 R2 Enterprise|Broadcom BCM5708C NetXtreme II & Intel Pro/1000 PT dual port SA|

iSCSI SAN used in test: Dell PowerVault MD3000i – 15 x 600gb SAS 15k (one global hotspare), 2 x Dell PowerConnect PC5424 switches (2 isolated iscsi subnets as recommended for the MD3000i).
LAN switches: Cisco 2960 series and Nexus 5010.

Virtual machines used:

|*VM*|*OS*|*vcpu*|*scsi*|*vmdk*|*Memory*|*NIC*|*VM HW vers*|
|Iometer01|Windows 2008 R2 SP1 (x64)|1 (default)|LSI Logic SAS (default)|40 gb thick|4 gb|Vmxnet 3|7|
|Iometer02|Windows 2003 R2 SP2 x64|2|LSI Logic Parallel|40 gb thick|8 gb|Vmxnet 3|7|
|Iometer03|Windows 2008 R2 SP1 (x64)|2|Paravirtual|40 gb thick|8 gb|E1000 (default)|7|

Comparison between virtual scsi types and guest OS iometer performance

Comparison of VM Iometer01, 02 and 03 running on local server disks, to see if there is a noticeable difference between guest OS versions and virtual scsi adapter types:

SERVER TYPE: VM iometer01 - W2K8 R2 SP1 x64 - LSI Logic SAS
CPU TYPE / NUMBER: 2 x Intel X5680 3.33GHz
HOST TYPE: Dell PowerEdge R710
STORAGE TYPE / DISK NUMBER / RAID LEVEL: Perc H700 with 6 x 600GB SAS 6gbps 15k 3.5" - Raid10
|*Test name*|*Latency*|*Avg iops*|*Avg MBps*|*cpu load*|
|Max Throughput-100%Read|2.66|22429|700|88%|
|RealLife-60%Rand-65%Read|16.18|2957|23|80%|
|Max Throughput-50%Read|1.38|42340|1323|63%|
|Random-8k-70%Read|17.52|2745|21|38%|

SERVER TYPE: VM iometer02 - W2K3 R2 SP2 x64 - LSI Parallel
CPU TYPE / NUMBER: 2 x Intel X5680 3.33GHz
HOST TYPE: Dell PowerEdge R710
STORAGE TYPE / DISK NUMBER / RAID LEVEL: Perc H700 with 6 x 600GB SAS 6gbps 15k 3.5" - Raid10
|*Test name*|*Latency*|*Avg iops*|*Avg MBps*|*cpu load*|
|Max Throughput-100%Read|3.00|19621|613|0%|
|RealLife-60%Rand-65%Read|16.04|3018|23|27%|
|Max Throughput-50%Read|1.34|39659|1239|0%|
|Random-8k-70%Read|17.56|2751|21|26%|

SERVER TYPE: VM iometer03 - W2K8 R2 SP1 x64 - VMware Paravirtual
CPU TYPE / NUMBER: 2 x Intel X5680 3.33GHz
HOST TYPE: Dell PowerEdge R710
STORAGE TYPE / DISK NUMBER / RAID LEVEL: Perc H700 with 6 x 600GB SAS 6gbps 15k 3.5" - Raid10
|*Test name*|*Latency*|*Avg iops*|*Avg MBps*|*cpu load*|
|Max Throughput-100%Read|2.15|27769|867|32%|
|RealLife-60%Rand-65%Read|15.90|3011|23|7%|
|Max Throughput-50%Read|1.22|48797|1524|48%|
|Random-8k-70%Read|17.50|2738|21|7%|

Logs: iometer01-local-01, iometer02-local-01, iometer03-local-01

Comparison of the same VMs on the MD3000i SAN:

SERVER TYPE: VM iometer01 - W2K8 R2 SP1 x64 - LSI Logic SAS
CPU TYPE / NUMBER: 2 x Intel X5680 3.33GHz
HOST TYPE: Dell PowerEdge R710
STORAGE TYPE / DISK NUMBER / RAID LEVEL: Dell PowerVault MD3000i iscsi SAN, diskgroup & virtual disk 0 with 4 disks using Raid 5 (database i/o type, 128k segment)
|*Test name*|*Latency*|*Avg iops*|*Avg MBps*|*cpu load*|
|Max Throughput-100%Read|15.38|3907|122|16%|
|RealLife-60%Rand-65%Read|36.71|1108|8|45%|
|Max Throughput-50%Read|12.40|4816|150|17%|
|Random-8k-70%Read|40.56|1103|8|41%|

SERVER TYPE: VM iometer02 - W2K3 R2 SP2 x64 - LSI Parallel
CPU TYPE / NUMBER: 2 x Intel X5680 3.33GHz
HOST TYPE: Dell PowerEdge R710
STORAGE TYPE / DISK NUMBER / RAID LEVEL: Dell PowerVault MD3000i iscsi SAN, diskgroup & virtual disk 0 with 4 disks using Raid 5 (database i/o type, 128k segment)
|*Test name*|*Latency*|*Avg iops*|*Avg MBps*|*cpu load*|
|Max Throughput-100%Read|15.32|3904|122|0%|
|RealLife-60%Rand-65%Read|35.71|1131|8|25%|
|Max Throughput-50%Read|16.82|3644|113|0%|
|Random-8k-70%Read|40.50|1107|8|27%|

SERVER TYPE: VM iometer03 - W2K8 R2 SP1 x64 - VMware Paravirtual
CPU TYPE / NUMBER: 2 x Intel X5680 3.33GHz
HOST TYPE: Dell PowerEdge R710
STORAGE TYPE / DISK NUMBER / RAID LEVEL: Dell PowerVault MD3000i iscsi SAN, diskgroup & virtual disk 0 with 4 disks using Raid 5 (database i/o type, 128k segment)
|*Test name*|*Latency*|*Avg iops*|*Avg MBps*|*cpu load*|
|Max Throughput-100%Read|15.14|3967|123|1%|
|RealLife-60%Rand-65%Read|35.07|1119|8|18%|
|Max Throughput-50%Read|12.44|4791|149|1%|
|Random-8k-70%Read|41.34|1105|8|12%|

Logs: iometer01-san-01, iometer02-san-01, iometer03-san-01

Comment: Different Windows Server OS versions and virtual scsi adapter types do not change the performance in a dramatic way (adding disks or changing the raid setup has a much larger impact). However, it looks like LSI Logic SAS uses a lot more cpu than the other virtual scsi adapter types; I only ran each test once, so more tests may be needed to confirm that. Local server disks are much faster than the cheap MD3000i iscsi SAN for single-VM performance. Note: the SAN diskgroup and virtual disk only used 4 disks on the SAN box, so the results cannot be directly compared to the server disks.

Dell PowerVault MD3000i iSCSI SAN iometer performance with different configurations

Comparison to see the effect of various raid levels and diskgroups: first I tested with a small diskgroup & virtual disk/LUN of 4 drives using raid 5. Then I tested on a 2-drive raid 1 LUN on the iscsi SAN. After that I tested with a 14-drive disk group (the virtual disk does not fill all the space then, because of the VMware 2 TB limit). The 14-drive disk group I tested with raid 5 and raid 10.

1 MB block on 4 spindles, raid 5, iscsi SAN:
SERVER TYPE: VM iometer01 - W2K8 R2 SP1 x64 - LSI Logic SAS
CPU TYPE / NUMBER: 2 x Intel X5680 3.33GHz
HOST TYPE: Dell PowerEdge R710
STORAGE TYPE / DISK NUMBER / RAID LEVEL: Dell PowerVault MD3000i iscsi SAN, diskgroup & virtual disk 0 with 4 disks using Raid 5 (database i/o type, 128k segment)
|*Test name*|*Latency*|*Avg iops*|*Avg MBps*|*cpu load*|
|Max Throughput-100%Read|15.18|3954|123|17%|
|RealLife-60%Rand-65%Read|34.52|1141|8|47%|
|Max Throughput-50%Read|12.45|4798|149|17%|
|Random-8k-70%Read|41.05|1114|8|38%|

Raid 1 – 2 disks:
SERVER TYPE: VM iometer01 - W2K8 R2 SP1 x64 - LSI Logic SAS
CPU TYPE / NUMBER: 2 x Intel X5680 3.33GHz
HOST TYPE: Dell PowerEdge R710
STORAGE TYPE / DISK NUMBER / RAID LEVEL: Dell PowerVault MD3000i iscsi SAN, diskgroup & virtual disk 0 with 2 disks using Raid 1 (database i/o type, 128k segment)
|*Test name*|*Latency*|*Avg iops*|*Avg MBps*|*cpu load*|
|Max Throughput-100%Read|15.17|3958|123|17%|
|RealLife-60%Rand-65%Read|52.25|902|7|34%|
|Max Throughput-50%Read|12.40|4803|150|17%|
|Random-8k-70%Read|59.63|919|7|23%|

Raid 10 – 14 disks:
SERVER TYPE: VM iometer01 - W2K8 R2 SP1 x64 - LSI Logic SAS
CPU TYPE / NUMBER: 2 x Intel X5680 3.33GHz
HOST TYPE: Dell PowerEdge R710
STORAGE TYPE / DISK NUMBER / RAID LEVEL: Dell PowerVault MD3000i iscsi SAN, diskgroup & virtual disk 0 with 14 disks using Raid 10 (database i/o type, 128k segment)
|*Test name*|*Latency*|*Avg iops*|*Avg MBps*|*cpu load*|
|Max Throughput-100%Read|17.14|3520|110|16%|
|RealLife-60%Rand-65%Read|15.45|3696|28|18%|
|Max Throughput-50%Read|14.29|4144|129|17%|
|Random-8k-70%Read|13.66|3936|30|24%|

Raid 5 – 14 disks:
SERVER TYPE: VM iometer01 - W2K8 R2 SP1 x64 - LSI Logic SAS
CPU TYPE / NUMBER: 2 x Intel X5680 3.33GHz
HOST TYPE: Dell PowerEdge R710
STORAGE TYPE / DISK NUMBER / RAID LEVEL: Dell PowerVault MD3000i iscsi SAN, diskgroup & virtual disk 0 with 14 disks using Raid 5 (database i/o type, 128k segment)
|*Test name*|*Latency*|*Avg iops*|*Avg MBps*|*cpu load*|
|Max Throughput-100%Read|17.06|3535|110|16%|
|RealLife-60%Rand-65%Read|19.49|2542|19|29%|
|Max Throughput-50%Read|14.38|4112|128|17%|
|Random-8k-70%Read|16.16|2754|21|38%|

Comment: As expected, raid 10 gives better performance at the cost of less space. Random i/o sees a big improvement with more disks added to the raid. Note: I also tested with different block sizes and with and without Storage I/O Control, but these settings did not seem to have much of an impact on performance. Storage I/O Control should only kick in when multiple VMs generate I/O load, so that is as expected, I think. I will use a default block size of 8 MB on all my datastores. Note: all tests are done without jumbo frames on the iscsi traffic.

Physical test results for comparison

SERVER TYPE: Physical Dell Precision T5400 - Windows 7 Enterprise x64
CPU TYPE / NUMBER: 1 x Intel Xeon E5440
HOST TYPE: Physical Dell Precision T5400
STORAGE TYPE / DISK NUMBER / RAID LEVEL: Crucial RealSSD C300 2.5" 128gb sata
|*Test name*|*Latency*|*Avg iops*|*Avg MBps*|*cpu load*|
|Max Throughput-100%Read|7.15|8243|257|11%|
|RealLife-60%Rand-65%Read|6.68|8629|67|9%|
|Max Throughput-50%Read|11.13|5067|158|11%|
|Random-8k-70%Read|5.45|10427|81|30%|

SERVER TYPE: Physical Dell Latitude E6400 - Windows 7 Enterprise x64
CPU TYPE / NUMBER: 1 x 2.53 GHz Intel Core 2 Duo
HOST TYPE: Physical Dell Latitude E6400 - Windows 7 Enterprise x64
STORAGE TYPE / DISK NUMBER / RAID LEVEL: Intel 80gb SSD gen2 M
|*Test name*|*Latency*|*Avg iops*|*Avg MBps*|*cpu load*|
|Max Throughput-100%Read|9.31|6402|200|35%|
|RealLife-60%Rand-65%Read|16.26|3305|25|56%|
|Max Throughput-50%Read|62.80|903|28|21%|
|Random-8k-70%Read|10.58|4996|39|37%|

SERVER TYPE: Physical Dell PowerEdge 2950 - W2K8 R2 x64
CPU TYPE / NUMBER: 2 x Intel Xeon 5150 2.66 GHz dual core
HOST TYPE: Physical Dell PowerEdge 2950 - W2K8 R2 x64
STORAGE TYPE / DISK NUMBER / RAID LEVEL: Perc 5/i - 4 x 300 gb sas 15k - raid 5
|*Test name*|*Latency*|*Avg iops*|*Avg MBps*|*cpu load*|
|Max Throughput-100%Read|3.64|17175|536|5%|
|RealLife-60%Rand-65%Read|37.42|1197|9|3%|
|Max Throughput-50%Read|4.91|12721|397|3%|
|Random-8k-70%Read|40.15|1161|9|1%|

Comment: As expected, a server with a raid set of many disks is faster than a single SSD on sequential throughput but slower on random I/O. Note: these are physical tests, and since virtualization has some overhead they are usually faster than the virtualized servers (iometer01-03) in similar configurations. Also note that I have tested some mainstream SSD disks here, which are not usually used in servers (server SSDs cost a lot more); it is still an interesting comparison when, for example, a developer has to choose between running a virtual machine in VMware Workstation on an SSD laptop/workstation or using a shared VMware LabManager server with SAN storage. The PE2950 server tested is a generation 9 Dell server and is much older than the generation 11 R710 servers, but that is one of the advantages of virtualization: you can buy new servers each year and move virtual servers to new hosts to upgrade the speed (over time virtualization might actually be faster than the old model of buying a dedicated server for a solution and running it for 4 years).
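As a side note on the raid 10 versus raid 5 gap on the 14-disk group above, here is a minimal worked example of the usual write-penalty arithmetic; the per-disk IOPS number is an assumption (roughly 180 random IOPS for a 15k SAS spindle), so the interesting part is the ratio between the two raid levels rather than the absolute figures.

```python
# Simple write-penalty model: RAID 10 costs 2 back-end I/Os per front-end write,
# RAID 5 costs 4 (read data, read parity, write data, write parity).
# Per-disk IOPS (~180 for a 15k SAS spindle) is an assumption.

def effective_iops(disks: int, penalty: int, read_fraction: float,
                   per_disk_iops: float = 180.0) -> float:
    raw = disks * per_disk_iops
    return raw / (read_fraction + penalty * (1.0 - read_fraction))

if __name__ == "__main__":
    # 14 spindles, 65% read / 35% write mix as in the RealLife test
    r10 = effective_iops(14, penalty=2, read_fraction=0.65)
    r5 = effective_iops(14, penalty=4, read_fraction=0.65)
    print(f"RAID 10: ~{r10:.0f} IOPS, RAID 5: ~{r5:.0f} IOPS, ratio {r10 / r5:.2f}x")
```

The measured ratio on the 14-disk group (3696 vs 2542 IOPS on the RealLife test, about 1.45x) is close to what this simple model predicts (roughly 1.5x), even though the absolute measured numbers are higher, presumably because of the controller cache.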
I think I get strange random performance because I use a thin provisioned disk in this test? 18.49 ms latency on local server disks on the "Random-8k" test with only 2658 iops seems very wrong? There is no other load and no other VM on the server. Iometer version: 2006.07.27
SERVER TYPE: Dell PowerEdge R710
CPU TYPE / NUMBER: 2 x 6 core Intel
HOST TYPE: VM w2k3 r2 enterprise x64, thin disk, paravirtual scsi
STORAGE TYPE / DISK NUMBER / RAID LEVEL: local 6 x sas 15k on Perc raid controller
|*TEST NAME*|*Avg Resp. Time ms*|*Avg IOs/sec*|*Avg MB/sec*|*% cpu load*|
|Max Throughput-100%Read|2.45|23690|740|36%|
|RealLife-60%Rand-65%Read|16.95|2896|22|2%|
|Max Throughput-50%Read|1.36|39750|1242|53%|
|Random-8k-70%Read|18.49|2658|20|3%|
|Max Throughput-100%Read|1.33|39542|1235|55%|
|RealLife-60%Rand-65%Read|16.84|2921|22|6%|
|Max Throughput-50%Read|1.35|40095|1252|53%|
|Random-8k-70%Read|18.48|2663|20|7%|
iSCSI is on isolated redundant PC5424 switches (I call the subnets "iscsi green" and "iscsi grey", since the MD3000i prefers to have 2 subnets in the iscsi configuration). I do have VM traffic and management on the same dVs switch "dvSwitch01vmnetwork", which uses physical NICs 0 and 5. I guess best practice is to separate those, but I was thinking that a dVs switch with 2 physical NICs would handle that well (these servers are mostly idle, with less than 5% cpu load). The unresponsive hosts happen when moving data between SAN and local storage, which only uses the iscsi ports heavily; I have not seen much traffic on the other ports.

virtual switches:
vSwitch1 VMkernel iSCSI Green -> vmnic2 1000 Full, vmk1 172.28.178.*
vSwitch2 VMkernel iSCSI Grey -> vmnic6 1000 Full, vmk2 172.28.179.*

dVs switches:
dvSwitch04hosting: dvPg Hosting LAN vDs -> dvSwitch04hosting-DVUplinks-4642 (dvUplink1 - vmnic1, dvUplink2 - vmnic7, dvUplink3 - none), virtual machines on this port group
dvSwitch01vmnetwork: dvPg Internal (vlan 66) and dvPg LAN vDs -> dvSwitch01vmnetwork-DVUplinks-4281 (dvUplink1 - vmnic0, dvUplink2 - vmnic5), virtual machines on this port group and vmk0 service console 130.*.*.*
dvSwitch02vmotion: dvPortGroupVmotion -> dvSwitch02vmotion-DVUplinks-4284 (dvUplink1 - vmnic3, dvUplink2 - vmnic4), vmk3 vmotion kernel port 192.168.168.* on this port group

I have not found a good naming standard for dVs configurations yet; it is a bit confusing at first compared to the old vswitch system, and I don't like the vCenter dependency, so I might go back to using only vswitches...
Hi, I have run a VMware cluster for a long time with PE2900 hosts and an MD3000i SAN, but in the last 6 months I have moved from the PE2900 to R710 Dell servers and from "ESX classic Enterprise" to "ESXi Enterprise Plus" on SD card. I use Dell PC5424 switches for iscsi traffic and Cisco 2960 switches for LAN traffic. I've also started using dVs switches for everything except the iscsi kernel ports. The hosts have 4 embedded gigabit ports and one extra quad port Intel card. All connections are redundant (i.e. one cable from the embedded ports and one from the quad port card for each function: vmotion, lan, iscsi and lan2). Lately I've had a strange problem where hosts become unresponsive in the vCenter console when I do heavy I/O on a server, like copying a VMDK file from local disk to SAN or vice versa, or during vMotion operations. The server is responsive on the console and virtual machines run fine during these operations, but vCenter thinks the host is not responding. vMotions often fail now, something that almost never happened before. I suspect the host is running fine but that the vCenter agent is just going crazy for some reason. I've tried restarting hosts but that did not help. My next steps are going to be:
- remove the dVs switches and use the traditional vswitch design again
- reinstall ESXi, remove/add the host to vCenter again, configure iscsi again
- verify that I have all ESXi patches
- reformat the SAN LUNs
- test host-to-vCenter connection stability during I/O again after the changes
- or, worst case, even reinstall the whole vCenter
Any other ideas what to do to fix this? Thanks for any input, ES
Hi, I have tested installing new virtual machines with Windows 2008 R2 on ESX 4 U1 with a VMware paravirtual boot disk and a vmxnet3 network card (VM version 7). This works fine on my ESX hosts managed by the new vCenter. However, I'm wondering: is it possible to convert these machines to VMware Workstation 7 with that disk type and network card? Have any of you experienced problems with that? Also, what is the best VMware Tools version to use if you want these images to be compatible? Or is it always best practice to remove and re-install VMware Tools when migrating images like this? I have googled these questions but could not find a direct answer for these new product versions from VMware; most articles relate to pre-ESX-4-U1 situations where paravirtual boot drives were not supported. Thanks for any answers, S
Hi, it seems to me that this bug is still not fixed? Or am I wrong? And I thought regional settings only messed up old programs, hehe.