Hello everybody,
I'm currently building up the following infrastructure on VMware vSphere 5.1 Enterprise. Here is what is given:
Hardware:
Software:
Now the big problem:
Virtual networking in vCenter is built up as follows:
Now I'm experimenting with the VMware best-practice guide "Oracle Databases on VMware". The interesting thing is that performance is equally bad on every server and virtual machine.
It doesn't matter whether the VM runs MS SQL, Oracle, or is just a plain Windows 2008 server with nothing on it: IOMeter keeps showing bad results for 4k block reads and writes.
So the question is: what is wrong?
Many thanks for a solution,
Marc
When you converted the servers, did you run post-migration cleanup processes?
Things like clearing out shadow devices, etc.
I wouldn't P2V a database server personally. I know that you can do it and I know it can be successful, but I'd rather build a new server and migrate the databases to it manually. This will typically net a much better result.
Agreed, but an amalgam of both usually works.
Whenever I am P2V'ing a DB or Exchange server (read: any server with heavy data change), I get a migration window where I can stop all the necessary services.
This leads to a clean migration. Never do a DB hot.
This is the main reason AD controllers cause so many issues: their services cannot be shut down, so they are never in a quiesced state.
It is not only the P2V machines. We also created brand-new machines on the infrastructure to build a test environment;
only the production servers were converted, through VMware Converter.
Performance is equally bad on the converted VMs and on the newly created VMs on the cluster.
I also did a fresh disk layout, based on the VMware Oracle best-practice how-to.
I think you have done everything right on the vSphere side.
What is the NetApp storage system telling you? Are you hitting any CPU spikes or back-to-back CPs? Are you overdriving your spindles? What type of disks are you using? Any errors showing up in the messages file? I think your storage system is the most likely bottleneck at this point. Have you tried reaching out to NetApp? Their performance support folks are pretty good.
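For anyone reading along, a quick way to watch for those symptoms on the controller itself is NetApp's sysstat (this is the 7-Mode CLI; run it on the filer, not on the host):

```shell
# On the NetApp controller: print one-second samples of CPU, ops,
# throughput, CP time and CP type, and disk utilization
sysstat -x 1
```

Back-to-back CPs show up as "B"/"b" in the "CP ty" column, and a disk-utilization column pinned near 100% suggests the spindles themselves are the bottleneck.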
Hello all,
I've done a lot of testing over the last two days and found out the following:
On another infrastructure I have two physical HP DL380 G7 servers running Oracle RAC on Red Hat Enterprise Linux.
The servers are connected via iSCSI (network bonding on the Red Hat side with 2 NICs) to the same Cisco switches (3750) and NetApp storage (2240-2).
I found an I/O testing script on this page http://benjamin-schweizer.de/measuring-disk-io-performance.html and ran it on one Oracle server.
The results were:
In real time, sysstat on the NetApp shows the following:
So this tells me the iSCSI performance between a physical server and my NetApp is OK!
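As a side note, a rough stand-in for that kind of quick check (without the linked script) is a plain dd run, assuming GNU dd on Linux; the file name is just an example:

```shell
# Rough 4k sequential write check: 2560 x 4k blocks = 10 MiB,
# flushed to disk at the end so the fsync cost is included.
dd if=/dev/zero of=io_test.bin bs=4k count=2560 conv=fsync 2>&1
rm -f io_test.bin
```

On filesystems that support it, adding oflag=direct bypasses the page cache; otherwise cached writes inflate the numbers, which is also why a guest can report good figures while the array sees little traffic.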
Now comes the part I don't understand... I ran the same test on a Linux system in the virtual infrastructure:
Looks OK, but in real time this is what's going on on my NetApp:
So this is weird... my Linux VM tells me I have good IOPS and throughput on the virtual system, but down on the NetApp there is actually nothing going on.
How can it be that on the virtual cluster, with the same iSCSI connection via the Ciscos and the same NetApp storage as the physical hardware cluster, the performance is SO BAD???
Something in VMware must be killing my performance to the storage system. And no, the switches are OK; we compared the configuration of the Ciscos in both environments.
To sum up:
Physical servers -> 2 NICs via bonding connected to the Ciscos -> the Cisco pushes its data through a 4-port trunk to the NetApp -> on the NetApp, all 4 NICs are joined in a virtual interface.
Virtual infrastructure -> 1 ESX server -> 4 NICs go to the Cisco via iSCSI multipathing (see config screenshot above) -> connection between NetApp and Ciscos is 10 GbE, over the NetApp's mezzanine card and the Cisco's 10 GbE modules.
Physical servers -> IOPS and throughput OK.
Virtual infrastructure -> IOPS and throughput miserable and not acceptable.
PLEASE HELP!!
Many thanks in advance for any help... I'm going mad over this!
Greetings, Marc
Was the multipathing policy changed to Round Robin?
Yes, all added datastores are set to Round Robin (multipathing) and all channels are set to Active (I/O).
Hi. Was RR there by default, or did you change it? How did you do it? On the CLI, so that every newly added LUN would be RR, or manually in vCenter?
Hi, first we created a vSwitch in vCenter. After that we configured multipathing for iSCSI. Then we created the software iSCSI HBA and mapped a LUN from the NetApp. After that, we went into the properties of the mapped LUN (datastore) and changed "Manage Paths" from Most Recently Used (VMware) to Round Robin. All of this we did manually in vCenter.
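For reference, the same thing can be done from the ESXi shell instead of the vCenter GUI. A sketch for vSphere 5.x; the naa ID is a placeholder, and the SATP name depends on which plugin actually claims your NetApp LUNs:

```shell
# List all devices and their current path selection policy (PSP)
esxcli storage nmp device list

# Set Round Robin on one device (replace naa.xxxx with your LUN's ID)
esxcli storage nmp device set --device naa.xxxx --psp VMW_PSP_RR

# Or make Round Robin the default PSP for the claiming SATP, so that
# newly added LUNs get RR automatically (check your SATP name first)
esxcli storage nmp satp set --satp VMW_SATP_ALUA --default-psp VMW_PSP_RR
```

The per-device setting only affects existing LUNs; the SATP default answers the "every new LUN" question above.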
News from HP support: they think the NC364T quad-port NIC is not supported for the DL380 G7 server series... ha ha ha. On HP's product page there is a compatibility list; have a look -> HP Product Bulletin -> HP ProLiant DL380 G7... oops!!!
From VMware support we got the answer to our call that maybe the software iSCSI HBA is the problem, and that with it VMs are limited to 4000 IOPS. They told us to use a real hardware iSCSI HBA. Dear VMware support: the DL380 G7 has 4 Broadcom onboard NICs which show up as HW iSCSI HBAs in the vSphere Client, and nobody can tell us how to actually use them as hardware iSCSI... so now what?
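To see how the host actually classifies those Broadcom ports, you can list the iSCSI adapters from the ESXi shell (a sketch; adapter names like vmhba33 are placeholders and will differ on your host):

```shell
# Show all iSCSI adapters the host knows about. Broadcom bnx2i ports
# appear as *dependent* hardware iSCSI adapters: they offload iSCSI but
# still need a vmknic bound to them for networking.
esxcli iscsi adapter list

# Check which vmknics are bound to a given adapter (vmhba33 is a placeholder)
esxcli iscsi networkportal list --adapter vmhba33
```

That dependent/independent distinction is likely the source of the confusion: the Broadcom ports are not full hardware iSCSI HBAs in the sense VMware support meant.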
OK, you could just run a command to do it for all your LUNs, but that's fine. The next thing I would do is install the NetApp plugin (VSC) in vCenter and apply all recommended settings. If I remember correctly it's MPIO, NFS, and another one I don't recall, but it is important: just go and apply ALL recommended settings. It will change the queue depth and a few more things. Those are for sure the NetApp recommended settings, so start there. I had a similar issue with a FAS2240 with iSCSI and vmknic binding for multipathing; when I changed to RR... huge latencies, a lot of problems. Apply the settings and reboot.
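The "command for all your LUNs" could look something like this on the ESXi shell (a sketch for 5.x; it assumes the device list prints each device's naa ID on its own line, and it applies RR to everything, so narrow the grep if you have non-NetApp LUNs):

```shell
# Set Round Robin on every naa.* device the host sees
for dev in $(esxcli storage nmp device list | grep '^naa\.' | awk '{print $1}'); do
  esxcli storage nmp device set --device "$dev" --psp VMW_PSP_RR
done
```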
No. It shows up as HW iSCSI but it is not. Try to configure it in vCenter and you'll see. Just be sure to follow the installation of the NetApp plugin, the VSC, apply the settings, reboot, and try again.
Oh, and another thing... everything is on the sabe network, right? No L3 in the middle...
Hello. We already installed the NetApp VSC plugin on the vCenter. After rebooting the servers, all three settings (MPIO, NFS, and the other one) show status "green". What I'm wondering about is that the next day, when I look into vCenter -> NetApp plugin, the server's settings, for example MPIO, have status "red"... what is this? It doesn't stay in the green state.
Sorry for my misunderstanding, but what do you mean by "sabe network" and "L3 in the middle"?
Sorry, same network... no routing in between.
So, a red state tells you something is not correct. MPIO is the set of multipathing settings; calling it MPIO makes no sense in my opinion, but that's another discussion.
Go through the details and check for the red lines, which show you exactly where the settings were not applied and could point you in the direction of your problem. You can check that by going to the same place where you apply the settings; there is something like "Details" there. It is a LONG list of ALL the settings, so it can be tedious, but in many cases it's worth the time. It will show you in red which settings failed to apply. In the meantime, I'll try to remember what I did to mitigate that problem. But check it out.
Is your NetApp active/active, or active/passive with ALUA?
Never mind that.