guusmeeuwis
Contributor

Server monitoring - advice needed

Jump to solution

For our server virtualisation project I am currently monitoring the servers. The results should show how resources are used in the current situation and give insight into the hardware needed for the new situation (we're thinking about moving to ESX).

The counters used to monitor are:

  • CPU usage
    % Processor Time

  • Memory usage
    Memory pages/sec

  • Disk input/output
Avg. Disk Bytes/Read and Avg. Disk Bytes/Write

  • Network usage
    NIC Bytes Sent/sec

I have collected the needed results, but now comes the hardest part: the analysis of these results. What is low/normal/high usage for the selected counters? Can someone give me an indication?
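Since the data is already collected, a first pass could simply summarise each counter over the monitoring window. This is only a sketch, assuming the data was exported as a simple CSV with a timestamp column followed by one column per counter (the layout Perfmon and most monitors can produce); column names and the file path are placeholders:

```python
import csv
import statistics

def summarize_counters(csv_path):
    """Summarize a monitoring CSV export: min/avg/max per counter.

    Assumes the first column is a timestamp and each remaining column
    holds numeric samples for one counter.
    """
    with open(csv_path, newline="") as f:
        reader = csv.reader(f)
        header = next(reader)
        counters = header[1:]                 # skip the timestamp column
        samples = {name: [] for name in counters}
        for row in reader:
            for name, value in zip(counters, row[1:]):
                try:
                    samples[name].append(float(value))
                except ValueError:
                    pass                      # skip blank/invalid samples
    return {
        name: {
            "min": min(values),
            "avg": statistics.mean(values),
            "max": max(values),
        }
        for name, values in samples.items() if values
    }
```

The min/avg/max per counter at least gives you a baseline to compare server groups against each other, even before deciding what "high" means in absolute terms.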

Note that I'm a trainee and not an expert yet ;)

Thanks!

0 Kudos
1 Solution

Accepted Solutions
WillemB
Enthusiast

Guus,

This data looks like a typical company setting as far as I can see. From the data you supplied, I would comfortably say that virtualization will be a good decision. I can see a virtualization ratio of at least 1:4 being achieved without problems. My experience with virtualization (4 projects) is that the results are usually better than expected.

Please do not use my comment as the foundation for your decision to virtualize; make sure you're comfortable with it too.

The only important piece of data I'm missing is the number of servers. When calculating how many servers can be virtualized, keep in mind things like dongles, USB devices, etc. These cannot be virtualized (or only with difficulty) and could reduce the number of potential VMs, making virtualization less attractive.

Also look at virtualization from a process level. Can your company cope with the change in how IT systems are managed? Also take into account that the migration from physical to virtual can raise a lot of questions, like "Why should I migrate? My system is working nicely and we're making money. I don't want to virtualize." Virtualization is just as much a change in the IT process as a shift in technology.

View solution in original post

0 Kudos
12 Replies
Borat_Sagdiev
Enthusiast

There is only one answer to this question: PlateSpin PowerRecon. Though Recon is not cheap (US cost is about $2 per server-day, i.e. one server monitored for one day, so do the math), it is hands down the best product money can buy for this task.

0 Kudos
guusmeeuwis
Contributor

Thanks, but note that I have already collected the data, so I only need to analyse the results. I don't really want to use another tool for the analysis; I'd prefer some explanation of how to do it!

0 Kudos
Borat_Sagdiev
Enthusiast

What did you use to collect the data? Perfmon? The nice thing about Recon is that it lets you do scenario modelling with your collected data, unlike a lot of other apps that just leave you with a spreadsheet full of numbers. Not sure what to recommend for raw data pulled from another application.

0 Kudos
tgradig
Hot Shot

You're asking the million-dollar question: to virtualize or not to virtualize. What is your ESX environment like, and what type of servers are you trying to virtualize? The answer will depend on that.

CPU and memory are the biggest factors in our environment. If you have a server with 2 CPUs running at 50%, it will take up 1 CPU on the ESX box. Every server you put on virtual will require a time slice of a CPU. If you want a guest with 2 processors in a virtual environment, it will require time slices from two different processors at the same time. We try to keep ours at 1 CPU per guest.

Memory: our typical build is 1 GB per server, but the guests only use about 512 MB or less. It will all depend on how much memory each ESX host has.
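As an illustration only, the core-and-memory rule of thumb above (a 2-CPU server at 50% counts as 1 core; guests use ~512 MB each) could be sketched as a rough host-count estimate. The 80% headroom factor and all the numbers are assumptions, not figures from this thread:

```python
import math

def hosts_needed(guests, host_cpus, host_mem_gb,
                 cpu_headroom=0.8, mem_headroom=0.8):
    """Rough host-count estimate from per-guest CPU and memory demand.

    guests: list of (cpu_cores_used, mem_gb) tuples; e.g. a 2-CPU server
    averaging 50% on both cores contributes 1.0 core.
    The headroom factors keep each host below 80% utilisation.
    Illustrative only; not a substitute for a real capacity planner.
    """
    total_cores = sum(cores for cores, _ in guests)
    total_mem = sum(mem for _, mem in guests)
    by_cpu = total_cores / (host_cpus * cpu_headroom)
    by_mem = total_mem / (host_mem_gb * mem_headroom)
    # Whichever resource runs out first determines the host count.
    return math.ceil(max(by_cpu, by_mem))
```

For example, 20 light guests (a quarter core, 512 MB each) would fit on two hypothetical 4-CPU/16 GB hosts under these assumptions.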

If you are new to ESX, I would recommend finding a user group in your area and trying to attend. These are great questions to bring up.

0 Kudos
guusmeeuwis
Contributor

I used PA Server Monitor Pro, because it's an agent-free monitor and it creates good-looking statistics! But, as you already mentioned, it doesn't analyse the collected data. For the next project I will take your advice and use a better tool. As for now: back to the question ;)

0 Kudos
guusmeeuwis
Contributor

Thanks for helping,

At the moment we're thinking about test, small app, and file servers. These servers are using less than 20% of their CPU and less than 500 pages/sec of memory. There are a lot of different types of servers, but the type most used is the HP ML370 G5 (2.33 GHz, 2 CPUs, 1 GB RAM). As far as the CPU goes, it's clear that the servers mentioned are easy to virtualise, but I am worried about the network/disk I/O and memory usage. I don't want to have a bottleneck there in the future.

0 Kudos
WillemB
Enthusiast

As stated, this is "the million dollar question". Since you haven't used a capacity planner dedicated to sizing your environment, it will be harder to interpret the data. I don't know of any tooling that can do the job based on self-collected metrics.

You could interpret the data yourself, but the results will be less trustworthy and it's a very big, hard job. Many people in this forum will be against this method. I would advise against it too, but if you have no choice it might give you an idea of what to expect.

-> Determine what kind of host machine you're going to use and note its CPU type and speed.

-> Scale all results to the chosen host CPU (I've used CPU benchmarks to do this).

-> Combine the average CPU results (add them together).

-> Then divide by 80% of one CPU's capacity; this gives the absolute minimum number of CPUs needed to run the systems at 80% load.

-> Then add up memory and disk space to figure out the memory and SAN/NFS requirements.

-> The combined disk I/O should be within what your NAS/SAN can handle.

-> The network should be able to carry the disk I/O (if SAN/NFS is used) plus your regular network load. (This can be split across multiple NICs: one for data and one for SAN/NFS.)
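The CPU steps above can be sketched as a small calculation. Assumed here: you have a per-core benchmark score for each source CPU and for the host CPU to do the rescaling; the 80% target comes from the steps, everything else is a placeholder:

```python
import math

def min_host_cpus(server_loads, host_benchmark, target_load=0.8):
    """Minimum host CPUs for a set of monitored servers.

    server_loads: list of (avg_cpu_load, source_benchmark) per server,
    where avg_cpu_load is the average CPU demand expressed in cores of
    the source machine (e.g. 2 cores at 20% -> 0.4), and the benchmark
    scores rescale that demand to the chosen host CPU.
    """
    # Step: scale each server's load to the host CPU via benchmark ratio,
    # then combine the averages by adding them together.
    scaled = sum(load * (bench / host_benchmark)
                 for load, bench in server_loads)
    # Step: divide by the target load (80%) and round up.
    return math.ceil(scaled / target_load)
```

So two servers each averaging 0.4 of a core whose CPU benchmarks at half the host's score would need a single host CPU; the same approach extends to memory and disk by straight addition, as the later steps describe.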

Hope it helps

P.S. There are tons of whitepapers out there, and some of those companies will have the same load characteristics as you.

tgradig
Hot Shot

If it's only small apps, dev, and file shares, I bet you're good on network traffic. Just make sure that you build your ESX environment with gigabit connections. Usually one gigabit connection for the Service Console, one for VMotion, and two for network traffic. This is our setup, and with over 20 guests (different types of production servers) on a DL585, network isn't a problem for us.

guusmeeuwis
Contributor

Well, my analysis should only give an indication: is virtualisation possible, and how? The precise set-up for the new situation will be determined by specialists. My part of the project is more to sum up the relevant benefits and disadvantages. I will give an indication of a possible set-up, but this is nothing more than a rough sketch. I certainly will take your comments as advice for future projects :) I do understand my approach isn't the best, after all, but within my planning it's impossible to restart the monitoring.

So, I have to do it the hard way, I guess. As stated above, it's only to indicate, not to determine the precise set-up.

I've done a global analysis of the different server groups. These results are the maximum usage (excluding peaks) during busy hours.

Applications

  • CPU: max 20% usage
  • Memory: 200 pages/sec
  • Disk I/O: 60,000 avg. disk bytes/read (same for write)
  • Network: 30,000 bytes sent/sec

Test

  • CPU: max 10% usage
  • Memory: 80 pages/sec
  • Disk I/O: 30,000 avg. disk bytes/read (same for write)
  • Network: 10,000 bytes sent/sec

Backup

  • CPU: max 8% usage
  • Memory: 300 pages/sec
  • Disk I/O: 5,000 avg. disk bytes/read (same for write)
  • Network: 8,000 bytes sent/sec

Is this collected data a good starting point for indicating the differences in resource usage between the server groups?
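Once the number of servers per group is known (it isn't stated in the thread), the per-server maxima above could be combined into a worst-case total, assuming every server hits its maximum at the same time. The server counts in the usage example are hypothetical:

```python
# Per-server busy-hour maxima, taken from the figures in the post.
GROUP_LOAD = {
    "applications": {"cpu_pct": 20, "pages_sec": 200,
                     "disk_bytes": 60_000, "net_bytes_sec": 30_000},
    "test":         {"cpu_pct": 10, "pages_sec": 80,
                     "disk_bytes": 30_000, "net_bytes_sec": 10_000},
    "backup":       {"cpu_pct": 8,  "pages_sec": 300,
                     "disk_bytes": 5_000,  "net_bytes_sec": 8_000},
}

def aggregate(counts):
    """Combined load if `counts` servers per group run at their maxima
    concurrently. counts: dict mapping group name -> number of servers."""
    totals = {}
    for group, n in counts.items():
        for metric, value in GROUP_LOAD[group].items():
            totals[metric] = totals.get(metric, 0) + n * value
    return totals
```

For example, `aggregate({"test": 2, "backup": 1})` sums two test servers plus one backup server per metric, which gives an upper bound to compare against the capacity of a candidate host.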

0 Kudos
WillemB
Enthusiast

Guus,

This data looks like a typical company setting as far as I can see. From the data you supplied, I would comfortably say that virtualization will be a good decision. I can see a virtualization ratio of at least 1:4 being achieved without problems. My experience with virtualization (4 projects) is that the results are usually better than expected.

Please do not use my comment as the foundation for your decision to virtualize; make sure you're comfortable with it too.

The only important piece of data I'm missing is the number of servers. When calculating how many servers can be virtualized, keep in mind things like dongles, USB devices, etc. These cannot be virtualized (or only with difficulty) and could reduce the number of potential VMs, making virtualization less attractive.

Also look at virtualization from a process level. Can your company cope with the change in how IT systems are managed? Also take into account that the migration from physical to virtual can raise a lot of questions, like "Why should I migrate? My system is working nicely and we're making money. I don't want to virtualize." Virtualization is just as much a change in the IT process as a shift in technology.

View solution in original post

0 Kudos
guusmeeuwis
Contributor

Indeed. My input mostly covers the management/change-process part. But, as TCO is an important aspect in all of this, I really want to give an overview of the current and the new IT infrastructure. I've written down your notes and discovered some whitepapers. I will update the topic as I make progress!

Thanks!

0 Kudos