VMware Cloud Community
Bruno9873
Contributor

Starting from scratch - need advice

Hi everybody. This is my first post here and I'm wondering if people can help me design a good branch office solution.

Just to give a quick overview of the site before diving into the subject: I have an HP ProLiant ML350 G5 server hosting every single role it can (DC, DHCP, DNS, file server, print server, antivirus, etc.). The situation is that people are complaining about speed issues and latency while accessing the file server.

I'm considering deploying the VMware Essentials Plus package to get the benefit of the vMotion and HA features. VMware actually has a lot of features and I'm a bit lost in all of them, so if you find something that suits this situation better, please feel free to say so.

My prospective configuration looks like this (hardware):

HP ProLiant DL385 G7 (two of these), each with:

AMD Opteron 6134 (2.3GHz/8-core/80W/12MB)

4x 4GB RAM PC10600R-9

2x 500GB 6Gb SAS 7.2k rpm

Redundant PSU 460W

I'm planning to install both servers in a cluster for redundancy and fault tolerance.

ESX-1 VMs: Print Server, Secondary DC and Main File Server (huge AutoCAD files and plans, etc.)

ESX-2 VMs: WSUS, Antivirus, Primary DC (DHCP) and a 2nd File Server (also a big one, but not like the main one)

The primary concern here, I believe, is a good SAN solution. I'm not expecting to rely on the internal HDs (maybe for ISOs or for some testing / unimportant files). Those will only be used in RAID 1 for internal redundancy, so if one drive fails I'm still fine.

I need some advice on selecting the right SAN. There are a bunch of ways to connect it to the servers and I'm wondering which one is not only the best, but the one that suits my needs. I know FC would be good (overkill?), so iSCSI could do as well. Some people prefer NFS? I don't know which one to look at. I had a look at QNAP's TS-879U-RP, which does 10GbE (that seems important for throughput, right?). A buddy suggests the IBM DS3512 or DS3400.
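To put the 10GbE question in perspective, this is the back-of-envelope throughput math I'm working from (the 80% efficiency factor is my own guess for protocol overhead, not a vendor figure):

# Rough usable throughput per link type (my own assumptions, not vendor figures)
def usable_mb_per_s(gbits, efficiency=0.8):
    return gbits * 1000 / 8 * efficiency  # Gbit/s -> MB/s, minus assumed overhead

for label, gbits in [("1GbE", 1), ("4x 1GbE aggregate", 4),
                     ("4Gb FC", 4), ("8Gb FC", 8), ("10GbE", 10)]:
    print(f"{label:>18}: ~{usable_mb_per_s(gbits):.0f} MB/s usable")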

Has anyone had good experiences with one of those solutions? Also, as an SMB, cost effectiveness is a primary concern too. Let's say I'd pay $4,500 for a server ($9,000 for two), VMware's licence is $5,000, and Windows Server 2008 licences are $800 each (planning 7 servers: $5,600). That puts me at roughly $24,000. Let's say I don't want to pay $10,000 for a complete NAS (HDs included). I'm not including the training, which would be 3 days, a plane, a hotel, etc... I can read whitepapers lol, or attend webinars :)
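For clarity, the arithmetic behind that ballpark (rounded figures; the $4,000 NAS number is just a placeholder I picked to see where the total lands):

# Ballpark budget from the figures above (rounded; storage budget still open)
servers = 2 * 4500     # two hosts
vmware  = 5000         # Essentials Plus licence
windows = 7 * 800      # 7x Windows Server 2008 licences
base    = servers + vmware + windows
print(f"Base (no storage): ${base:,}")             # $19,600
for nas in (4000, 10000):                          # candidate storage budgets (my guesses)
    print(f"With a ${nas:,} NAS: ${base + nas:,}")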

Thanks for the help!

5 Replies
weinstein5
Immortal

Welcome to the Community - the hardware looks good; the only thing I would recommend is increasing the amount of memory, as 16 GB is rather light. You will also need a machine (it can be virtual) to host your vCenter Server.

In terms of a SAN, I would look at either iSCSI or NAS, for the simple reason that you would not have to put in an FC network, which will save money.

I would also recommend training, particularly VMware's 5-day ICM class. It will not only give you a good framework for how virtualization works, but also the tools and knowledge to implement the cluster effectively, and the interaction with the other students and the instructor will strengthen your knowledge.

If you find this or any other answer useful please consider awarding points by marking the answer correct or helpful
Bruno9873
Contributor

Thanks for your reply.

Indeed, the amount of memory is a bit low. I was wondering: if I buy 2 servers with 16GB each, would the memory be added to a pool, giving 32GB total?

As for the SAN/NAS/iSCSI, I already have an FC network, so that wouldn't be a problem; it's just a matter of bandwidth here. But I don't think FC is necessary for my needs. People are just complaining about big latency when accessing the file server, which I think comes from the RAID configuration inside the box... I guess the controller doesn't handle the I/O well enough, and the user experience goes bad every time someone is reading/writing a file.
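To give an idea of what I mean, this is the kind of ballpark I'm doing in my head; I don't actually know the array layout in the ML350, so every value below is an assumption, not a measurement:

# Very rough guess at what the current box can deliver - ALL values assumed
iops_per_disk = 80      # typical figure quoted for a 7.2k spindle
disks         = 2       # assuming a simple RAID 1 pair
read_ratio    = 0.7     # assuming a 70/30 read/write mix
write_penalty = 2       # RAID 1: every write lands on both mirrors

raw    = disks * iops_per_disk
usable = raw * read_ratio + raw * (1 - read_ratio) / write_penalty
print(f"~{usable:.0f} usable IOPS, shared by the OS, AV, print and file services")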

Maybe just moving the file services off this server would do the job. But I want a virtualized infrastructure, and this would be the right time to do it.

Also, I'm considering just buying one server, installing the free version of ESX 4.1, and seeing how it goes. Indeed, all my VMs would be on the same server, but I could build it a little stronger, like having a dual-everything configuration (controllers, NICs, PSUs, etc.).

Buying one server and a good NAS with high throughput, I think my solution would be good. Until now they have always had a single server anyway... That server could then be used in the future for testing purposes or anything else.

What do you think?

Josh26
Virtuoso

Bruno9873 wrote:

Also, I'm considering just buying one server, installing the free version of ESX 4.1, and seeing how it goes. Indeed, all my VMs would be on the same server, but I could build it a little stronger, like having a dual-everything configuration (controllers, NICs, PSUs, etc.).

Buying one server and a good NAS with high throughput, I think my solution would be good. Until now they have always had a single server anyway... That server could then be used in the future for testing purposes or anything else.

What do you think?

If you're buying one server, why buy a NAS? Local storage is cheaper.

It boils down to this.

That load can easily run on one server. If you're buying two, it's because you want redundancy.

If you're buying another server to get redundancy, you must have shared storage, and that NAS becomes your single point of failure. If you buy a cheap NAS that isn't significantly more reliable than either server, you've actually gone backwards from a single-server environment.

hbato
Enthusiast

If you are looking for future expansion, you might want to invest in a SAN (better performance and reliability), but if this will be your setup, then as Josh said, a single server will do.

Regards, Harold

Bruno9873
Contributor

Yes, I think that would be a good solution. For this site, a SAN or a NAS might not be required, as it's not a mission-critical site and will not grow much over the next months/years.

Maybe you can tell me what you think about this quick config I put together from Dell's site:

Dell PowerEdge R510
Intel Xeon E5649 2.53GHz, 12M Cache, 5.86 GT/s QPI, 6C 
Intel Xeon E5649 2.53GHz, 12M Cache, 5.86 GT/s QPI, 6C 
32GB Memory (8x4GB), 1333MHz Dual Rank LV RDIMMs for 2 Processors, Optimized  (more than necessary, but not by much)
4x 600GB 15K RPM SAS 6Gbps 3.5in Hot-plug Hard Drive 
RAID 10 for PERC6i, PERC H200 and H700 Controllers, x8 Chassis  (I think RAID 10 is a good choice for performance)
PERC H700 Integrated RAID Controller 512MB Cache  (is that a good controller?)
DVD ROM, SATA, Internal 
750 Watt Redundant Power Supply  (some 'safety')
2x Intel Gigabit ET Dual Port NIC, PCIe-4  (better throughput with 4 NICs?)

It's mostly a general question, but I'm curious and would like some clarification on the number of processors vs. the number of VMs I can have. How does that work? I mean, logically, if my server has 12 cores, can I not have more than 12 VMs on that server? It's dual 6-core, but I guess those 6-core CPUs have 12 threads each, so does that make it a 24-VM server?
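To make the question concrete, here is the arithmetic I'm doing in my head (the 1 vCPU per VM is just my assumption for a starting point):

# How I'm (perhaps naively) counting it - the per-VM vCPU count is my assumption
physical_cores  = 2 * 6                 # dual six-core E5649
logical_threads = physical_cores * 2    # with Hyper-Threading on
planned_vms     = 7
vcpus_per_vm    = 1

print(f"{physical_cores} cores / {logical_threads} threads "
      f"for {planned_vms * vcpus_per_vm} planned vCPUs")
# Question: is 1 vCPU per core/thread really the ceiling, or can the
# hypervisor schedule more vCPUs than cores for light VMs?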

Thanks again guys.
