IT_Architect
Enthusiast
Jump to solution

I need a storage strategy

I would like to leverage this group for ideas.

- 2 machines, single-processor quad-core Xeon 5430 (Harpertown, 2.66 GHz), 4 GB RAM, RAID-1 SCSI, 146 GB 10,000 rpm drives.

- VMware ESXi on both.

- Operating system will be FreeBSD 7.1 64-bit on the VMs.

- Manual failover for all 5 VMs and load balancing for one, e.g., change IP assignments in the control panel when there is a problem; load balancing via round-robin DNS.

This will not be a typical VMware environment. My interest is not consolidating servers but rather portability, reserve performance, and failover. The load-balanced site is a web server that serves between 12,000 and 45,000 unique visitors an hour. The disk controller isn't a cheapie, so it's quite a bit faster than a single SCSI drive on reads and has no visible penalty on writes; not that it matters, as I'm not disk-bound in the native environment. In the native environment I use most of the 4 GB of RAM, which is why we are moving to 64-bit. We have proven we can run this on a single dual-processor AMD server, but it's tapped out too. Since we are moving this site anyway, I decided to use it as an acid test for ESXi. If ESXi can stand up under this, I don't need to worry about using it anywhere. After all, server consolidation is about making one busy server anyway, right? This will make it busy.

Besides being open to critique of the above plan, I don't feel I have a good storage strategy. I need your thoughts on a strategy for the local SCSI storage, for storing VM backups and VM templates, and for backing up the user data from the guest operating systems.

Thanks for your input!

0 Kudos
9 Replies
Texiwill
Leadership
Jump to solution

Hello,

- 2 machines, single-processor quad-core Xeon 5430 (Harpertown, 2.66 GHz), 4 GB RAM, RAID-1 SCSI, 146 GB 10,000 rpm drives.

I would use two six- or eight-core CPUs. Remember that at least one core is in essence dedicated to the hypervisor. This also depends on the number of VMs you wish to run: with only a quad core you could safely run only one dual-vCPU VM. I would want to be able to run all 5 on one host in case of emergency (2 vCPUs x 5 VMs = 10 vCPUs, plus 1 for the hypervisor = 11 vCPUs, or 11 cores).

I would also install a minimum of 24 GB of memory, as you maxed out 4 GB in the physical system and up to 0.5 GB could be used by the hypervisor, leaving only 3.5 GB or so for the VM (4 GB x 5 VMs = 20 GB, plus roughly 1 GB for the hypervisor = 21 GB). I would also use 6-8 pNICs for your Management, Storage, VMotion, and VM networks; check out my Topology Blogs for assistance with and an explanation of the networking.

Plan on the failure of at least one node and on moving all VMs onto the second node. This of course depends on whether or not your site can handle any downtime at all. If you do not have to have all 5 running all the time, you can lower the CPU and memory requirements; likewise, if you can live with memory and CPU overcommit that will lower overall performance, you can also relax the memory and CPU constraints.
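
For illustration only, a minimal sketch of that worst-case sizing arithmetic; all figures are the assumptions from this thread (5 dual-vCPU VMs, 4 GB per guest, roughly one core and 1 GB set aside for the hypervisor):

```python
# Worst-case sizing check: can one host carry every VM if the other fails?
# All figures are illustrative assumptions taken from the discussion above.

vms = 5                    # VMs that must keep running after a host failure
vcpus_per_vm = 2           # dual-vCPU guests
mem_per_vm_gb = 4          # each guest sized like the current 4 GB physical box
hypervisor_cores = 1       # roughly one core set aside for the hypervisor
hypervisor_mem_gb = 1      # rough allowance for ESXi overhead

cores_needed = vms * vcpus_per_vm + hypervisor_cores     # 5*2 + 1 = 11
mem_needed_gb = vms * mem_per_vm_gb + hypervisor_mem_gb  # 5*4 + 1 = 21

print(f"Cores needed on the surviving host:  {cores_needed}")
print(f"Memory needed on the surviving host: {mem_needed_gb} GB")
```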

- VMware ESXi on both.

This is fine.

- Operating system will be FreeBSD 7.1 64-bit on the VMs.

Since FreeBSD 7.1 is not on the official supported list, you may run into support issues here. It most likely will work, but be aware that the only VMware Tools that have a chance of working are the open source VMware Tools.

- Manual failover for all 5 VMs and load balancing for one, e.g., change IP assignments in the control panel when there is a problem; load balancing via round-robin DNS.

This will not be a typical VMware environment. My interest is not consolidating servers but rather portability, reserve performance, and failover. The load-balanced site is a web server that serves between 12,000 and 45,000 unique visitors an hour. The disk controller isn't a cheapie, so it's quite a bit faster than a single SCSI drive on reads and has no visible penalty on writes; not that it matters, as I'm not disk-bound in the native environment. In the native environment I use most of the 4 GB of RAM, which is why we are moving to 64-bit. We have proven we can run this on a single dual-processor AMD server, but it's tapped out too. Since we are moving this site anyway, I decided to use it as an acid test for ESXi. If ESXi can stand up under this, I don't need to worry about using it anywhere. After all, server consolidation is about making one busy server anyway, right? This will make it busy.

You may wish to consider VMware VMotion and HA as possible solutions for failover in the face of hardware and software issues. I would also consider a third machine, in case it is hard to get replacement hardware quickly when one fails; N+1 is the best solution for hardware. This would also allow you to lower the overall memory/CPU requirements.

Besides being open to critique of the above plan, I don't feel I have a good storage strategy. I need your thoughts on a strategy for the SCSI, storage of VM backups and VM templates, and backups from the operating systems of the user data.

I would use redundant shared storage, either IP-based or FC-HBA. For your case, I would look at the mid-range products that deliver the best performance.

Before going much further, however, I would measure three things:

1) How much CPU each physical box actually uses

2) How much data is transferred per request (this will help determine bandwidth requirements and how many additional pNICs will be required, as well as the type of pSwitches you may need).

3) How much disk I/O is happening per request. This will help you gauge which storage subsystem will work best for you. You may find you need 10Gb pNICs for the best IP storage performance, or FC-HBAs to handle everything.

These numbers will help guide your decisions from here.
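
As a rough illustration of item 2, a minimal sketch that pulls average bytes per request and peak requests per second out of a combined-format web access log; the log path and field positions are assumptions, so adjust them to your setup:

```python
# Rough bandwidth estimate from a combined-format web access log.
# Path and log format are assumptions; adjust to your environment.
from collections import Counter

log_path = "/var/log/httpd-access.log"   # hypothetical path

bytes_total = 0
requests = 0
per_second = Counter()

with open(log_path) as log:
    for line in log:
        parts = line.split()
        if len(parts) < 10:
            continue
        timestamp = parts[3].lstrip("[")   # e.g. 10/Feb/2009:14:05:01
        size_field = parts[9]              # response size in bytes, or "-"
        if size_field.isdigit():
            bytes_total += int(size_field)
        requests += 1
        per_second[timestamp] += 1

if requests:
    avg_bytes = bytes_total / requests
    peak_rps = max(per_second.values())
    print(f"Requests sampled:        {requests}")
    print(f"Avg bytes per request:   {avg_bytes:.0f}")
    print(f"Peak requests/second:    {peak_rps}")
    print(f"Peak bandwidth estimate: {avg_bytes * peak_rps * 8 / 1e6:.1f} Mbit/s")
```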


Best regards,

Edward L. Haletky

VMware Communities User Moderator

====

Author of the book 'VMWare ESX Server in the Enterprise: Planning and Securing Virtualization Servers', Copyright 2008 Pearson Education.

Blue Gears and SearchVMware Pro Blogs: http://www.astroarch.com/wiki/index.php/Blog_Roll

Top Virtualization Security Links: http://www.astroarch.com/wiki/index.php/Top_Virtualization_Security_Links

--
Edward L. Haletky
vExpert XIV: 2009-2023,
VMTN Community Moderator
vSphere Upgrade Saga: https://www.astroarch.com/blogs
GitHub Repo: https://github.com/Texiwill
IT_Architect
Enthusiast
Jump to solution

Thank you for your response. I kept checking back and nobody had replied. After you did, I didn't get the e-mail notification that I normally get, which is why it took me so long to respond.

It appears my plan needs revising, and your opinions are supported by a webinar I just finished.

Thanks tons!

0 Kudos
dilidolo
Enthusiast
Jump to solution

It all comes down to your budget; 4 GB of memory is too low.

VMware Tools runs just fine with 7.1 x64; I've been running it since the beta without any issues.

If you do not want to spend money on HA/VMotion, create 2 VMs to run nginx as a proxy server in a CARP failover pair, put one on each ESXi host, then create a few VMs to run the web servers behind nginx. DNS round-robin has a limitation: if a node goes down, traffic is still sent to that node, whereas nginx is able to detect node status.
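
For illustration, a minimal sketch of what the nginx front end might look like; the backend addresses and thresholds are made-up assumptions, and CARP would float the proxy's shared IP between the two nginx VMs at the FreeBSD level:

```nginx
# Hypothetical nginx front end on each proxy VM: round-robin across the
# web-server VMs, skipping a backend that stops answering.
upstream web_backend {
    server 10.0.0.11:80 max_fails=3 fail_timeout=30s;
    server 10.0.0.12:80 max_fails=3 fail_timeout=30s;
    server 10.0.0.13:80 max_fails=3 fail_timeout=30s;
}

server {
    listen 80;
    server_name www.example.com;

    location / {
        proxy_pass http://web_backend;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;
    }
}
```

With max_fails/fail_timeout, nginx stops sending requests to a backend it cannot reach, which is the node-status detection mentioned above; plain round-robin DNS has no equivalent.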

IT_Architect
Enthusiast
Jump to solution

Aaaarg! I just lost another message. They need a vBulletin here. It's like what, $180 for an owned license?

Texiwill

One of the things that I heard in the webinar was NOT to set up local VMFS volumes, not even for swap. I don't have FC available, but I do have iSCSI and NAS. I have a KVM, and the box is on the compatibility list, but I cannot get them to install a USB key to boot from, so I guess I'll need a small mirror to fire up ESXi so I can hook the VMs up to the iSCSI. Since we are talking software initiators, we're talking about moving the SCSI processing from the HBA to the CPU, making CPU and RAM all the more important. What say you about this?

I read the following about VMFS:

"Clustering - The second and vitally important feature of VMFS is support for sharing or clustering. VMFS is responsible for the coordination of multiple independent ESX servers safely reading and writing to the same block device. It is VMFS which handles the basic safe transfer of ownership of data from server to server during high level VI activities such as vmotion, DRS, storage vmotion, HA, etc." Comment in webinar: *"RDM is required for clustering when you have twoVMs on two different ESXi servers."

I would like to clearly understand the meaning of those statements. E.g., are we saying that in a load-balanced situation you have more than one server accessing the same VM data on the same LUN? If so, how is that arbitrated? Surely DBMSes would be a special case.

dilidolo

VMware Tools runs just fine with 7.1 x64; I've been running it since the beta without any issues.

Excellent! And from another FreeBSD user who can easily guess how I got dragged into FreeBSD.

...create 2 VMs to run nginx as a proxy server in a CARP failover pair, put one on each ESXi host, then create a few VMs to run the web servers behind nginx. DNS round-robin has a limitation: if a node goes down, traffic is still sent to that node, whereas nginx is able to detect node status.

You perceived correctly that I don't want to invest heavily before knowing whether VMware will hold up and make sense. If VMware does make sense, the cost of VMware Enterprise will make sense. I'm not here to save money on server consolidation; I don't know if that ever actually makes sense. My current plan I KNOW does, and the only thing I need to do is juice the hardware. I'm here for all of the other VMware advantages like portability, upgradability, expandability, failover, etc. I'm a little tired of having a crisis once a year due to growing pains.

I'll have to think through what you have said; I'm not familiar with that setup. I only know that a hardware load balancer that can handle a lot of traffic costs a lot per month, so you can bet I will look into it. I understand the round-robin part; we just switch it manually, but I like your way better. Nothing I've used works that well even when it's automatic: servers are seldom down, they just aren't serving pages with all the content. I will research that and let you know. I may just set up the serving first to see if ESXi can even handle it reliably. I know how much hardware I need now; I shouldn't need more than 25% more, and if I have twice as much, it should handle it. I haven't used iSCSI. We have local arrays at the moment.

Thanks both for your valuable help.

0 Kudos
wila
Immortal
Jump to solution

Aaaarg! I just lost another message. They need a vBulletin here. It's like what, $180 for an owned license?

Do you mean like this?

Happens to me all the time, but if I'm the only one complaining it is not that likely to get fixed soon... For the record, this used to work without losing your work; if that is what is happening, then please chip in on that thread, as it is posted in an area that is monitored by the forum development team.

Not getting all the reply e-mails is another issue with the forum that I also see here, and it has been "being fixed" for months now... ;(



--

Wil

_____________________________________________________

Visit the new VMware developers wiki at http://www.vi-toolkit.com

| Author of Vimalin. The virtual machine Backup app for VMware Fusion, VMware Workstation and Player |
| More info at vimalin.com | Twitter @wilva
0 Kudos
wila
Immortal
Jump to solution

Servers are seldom down, they just aren't serving pages with all the content. I will research that and let you know.

Very true; with all the attention on HA, this is something I hardly ever hear about. Apache/IIS hardly ever dies on its own, nor does the hardware; it is the middleware, the database server, a switch, a guest that runs out of memory, you name it... If you only depend on Apache/IIS-specific tests, then your site will still be perceived as down by the clients. It is not hard to write some uptime tests that check that the whole shebang is working, but they need to be run from another host, preferably even from another location.
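
For illustration, a minimal sketch of such an end-to-end test in Python; the URL, marker string, and addresses are hypothetical. The idea is to fetch a page that exercises the whole stack and check for content that only a healthy site can render, then run it from cron on a host in another location:

```python
# Minimal end-to-end uptime check: fetch a page that exercises the whole
# stack and verify it contains content that only a healthy site can render.
# URL, marker string, and alert address are assumptions for illustration.
import smtplib
import urllib.request
from email.message import EmailMessage

URL = "http://www.example.com/healthcheck"   # page that hits app + database
MARKER = "Latest articles"                    # string present only on a good render
ALERT_TO = "ops@example.com"

def site_is_healthy() -> bool:
    try:
        with urllib.request.urlopen(URL, timeout=15) as resp:
            body = resp.read().decode("utf-8", errors="replace")
            return resp.status == 200 and MARKER in body
    except Exception:
        return False

def send_alert() -> None:
    msg = EmailMessage()
    msg["Subject"] = f"Site check failed: {URL}"
    msg["From"] = "uptime-check@example.com"
    msg["To"] = ALERT_TO
    msg.set_content("The end-to-end page check failed or the marker text was missing.")
    with smtplib.SMTP("localhost") as smtp:
        smtp.send_message(msg)

if __name__ == "__main__":
    if not site_is_healthy():
        send_alert()
```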

For example, I run automated tests for a customer of mine on the other side of the world, and that works out really well, but recovery isn't fully automated (it could be, when coupled with a round-robin solution like the one mentioned here); I just get e-mails when there's something wrong.

For them it is OK, especially due to the timezone difference, which gives us fully manned coverage; for a bigger site I can imagine you would want to address that bit as well.



--

Wil

_____________________________________________________

Visit the new VMware developers wiki at http://www.vi-toolkit.com

| Author of Vimalin. The virtual machine Backup app for VMware Fusion, VMware Workstation and Player |
| More info at vimalin.com | Twitter @wilva
0 Kudos
Ken_Cline
Champion
Jump to solution

I read the following about VMFS:

"Clustering - The second and vitally important feature of VMFS is support for sharing or clustering. VMFS is responsible for the coordination of multiple independent ESX servers safely reading and writing to the same block device. It is VMFS which handles the basic safe transfer of ownership of data from server to server during high level VI activities such as vmotion, DRS, storage vmotion, HA, etc." Comment in webinar: *"RDM is required for clustering when you have twoVMs on two different ESXi servers."

I would like to clearly understand the meaning of those statements. E.g., are we saying that in a load-balanced situation you have more than one server accessing the same VM data on the same LUN? If so, how is that arbitrated? Surely DBMSes would be a special case.

They're talking about two different types of clustering. VMFS clustering deals with the clustering of ESX hosts to enable advanced VI features such as VMotion and HA.

VMFS is responsible for the coordination of multiple independent ESX servers safely reading and writing to the same block device.

Multiple hosts are accessing the same block device - NOT the same file.

BTW - this is one of the big problems Microsoft has implementing a "live migration" feature - NTFS cannot have multiple simultaneous writers...

RDM is required for clustering when you have two VMs on two different ESXi servers.

This is referring to MSCS (or other inter-VM clustering technologies) and not VMware host clustering.

Ken Cline

Technical Director, Virtualization

Wells Landers

TVAR Solutions, A Wells Landers Group Company

VMware Communities User Moderator

Ken Cline VMware vExpert 2009 VMware Communities User Moderator Blogging at: http://KensVirtualReality.wordpress.com/
0 Kudos
Texiwill
Leadership
Jump to solution

Hello,

One of the things that I heard in the webinar was NOT to set up local VMFS volumes, not even for swap. I don't have FC available, but I do have iSCSI and NAS. I have a KVM, and the box is on the compatibility list, but I cannot get them to install a USB key to boot from, so I guess I'll need a small mirror to fire up ESXi so I can hook the VMs up to the iSCSI. Since we are talking software initiators, we're talking about moving the SCSI processing from the HBA to the CPU, making CPU and RAM all the more important. What say you about this?

Yes, that is exactly the case.

I read the following about VMFS:

"Clustering - The second and vitally important feature of VMFS is support for sharing or clustering. VMFS is responsible for the coordination of multiple independent ESX servers safely reading and writing to the same block device. It is VMFS which handles the basic safe transfer of ownership of data from server to server during high level VI activities such as vmotion, DRS, storage vmotion, HA, etc."* Comment in webinar: **"RDM is required for clustering when you have twoVMs on two different ESXi servers."

The last statement is for Shared Disk Clusters from within the VM, not for storing VMs. You do not need to use a local VMFS; even so, I always like having one for redundancy reasons. What happens if the storage array dies for some reason? I would have enough local disk to run the VMs from there if necessary.

I would like to clearly understand the meaning of those statements. E.g., are we saying that in a load-balanced situation you have more than one server accessing the same VM data on the same LUN? If so, how is that arbitrated? Surely DBMSes would be a special case.

VMFS is cluster-aware. You will need to store your VMs on a cluster-aware filesystem; that could be NFS, or VMFS over iSCSI.

What NFS/iSCSI Servers are you considering using?


Best regards,

Edward L. Haletky

VMware Communities User Moderator

====

Author of the book 'VMWare ESX Server in the Enterprise: Planning and Securing Virtualization Servers', Copyright 2008 Pearson Education.

Blue Gears and SearchVMware Pro Blogs: http://www.astroarch.com/wiki/index.php/Blog_Roll

Top Virtualization Security Links: http://www.astroarch.com/wiki/index.php/Top_Virtualization_Security_Links

--
Edward L. Haletky
vExpert XIV: 2009-2023,
VMTN Community Moderator
vSphere Upgrade Saga: https://www.astroarch.com/blogs
GitHub Repo: https://github.com/Texiwill
0 Kudos
IT_Architect
Enthusiast
Jump to solution

wila wrote: "Happens to me all the time, but if I'm the only one complaining it is not that likely to get fixed soon."

I tagged your thread and made a tongue-in-cheek case for vBulletin. I think you'll get a kick out of it. Personally, I think timeouts are one of the least of their problems. But if they go vBulletin it will fix that too.

Texiwill wrote: "The last statement is for Shared Disk Clusters from Within the VM."

Such as for the /home directories of web sites, where, when I change one page, the change is reflected on all the ESXi hosts?

*"Even so I always like having one for redundancy reasons. What happens if the storage array dies for some reason? I would have enough local disk to run the VMsfrom there if necessary."*

Interesting concept. It would also allow me to prototype with ESXi without changing the current storage method. I could then add iSCSI and NAS, clone the VMs, and copy them over to quantify the impact compared to local storage. It could serve as a place to store VM backups as well. If I needed to get a file back from a backed-up VM, I could add the VM to inventory, fire it up, and drag the files to an NFS volume, and, like you said, use it if there is an iSCSI problem.

"The VMFS is cluster aware, you will need to store your VMs on a Cluster aware filesystem, that could be NFS, or VMFS over iSCSI."

NFS? That's interesting. There is a lot more you can do with an NFS volume, and more tools available when something needs to be changed or goes wrong. Thinking out loud: NFS requires a host, but it also arbitrates access, whereas with VMFS the ESXi hosts would need to do that. Thus, throughput would probably be less, but not necessarily response time; for small reads it might be quicker, and small writes might be too because of the write cache in the controller. About all I would lose out on is RDMs, it appears.

What NFS/iSCSI Servers are you considering using?

We're trying a server at SoftLayer. They have KVMs built into the motherboards, I get better disk controllers, and I have access to iSCSI and NAS. There is hardware failover and load balancing that can be managed from the hosting control panel, but this feature is expensive at our traffic levels. We have three gigabit network interfaces available: KVM, public, and private. I can also get portable IPs. Those hosted there say the iSCSI is faster than local drives (I don't buy that, but I believe it's not a dog). iSCSI space is about the same price as a local SCSI mirror, and NAS about half that. I haven't tested the speeds yet on either.

Thanks TONS for your help!

PS: I just came across the SAN System Design and Deployment Guide. That should help me A LOT.

0 Kudos