VMware Cloud Community
RogerAudet
Contributor

Future Intel Server board support???

Does anybody know what criteria VMware uses to determine which motherboards will and will not be supported by future updates?

Background: I'm trying to build a whitebox ESXi 4.0 server for home use. My vendor sent me a new S3420GP board (brand new model) with an Intel 3420 chipset. Nice board. It supports up to 32 GB of expensive RAM (or 16 GB of cheap RAM), but ESXi doesn't work on it. It just hangs at "Initializing scheduler".

My options are: I can send the motherboard back and buy a "certified" board like the S3200 or S3210 (but those can only handle up to 8 GB of RAM), or "hope" that ESXi 4.0 Update 2 or a future build will handle this (and try out XenServer until then ;-)

29 Replies
DSTAVERT
Immortal

When you have questions you need to create a new post and fully explain the problem with the system you are trying to install on. Check the Hardware Compatibility List to see if your system is supported.

-- David -- VMware Communities Moderator
felici
Contributor

That is exactly the problem. There has been no solution to this from VMware/Intel. You can find an analogous thread on the Intel community forums...

It's still the same thread... same problem... no solution.

example:

http://communities.intel.com/message/79498#79498

The Intel S3420GP motherboard is in the SR1630GP barebones server, the Intel entry/value server currently in distribution. The problem is that Intel/VMware have been touting the 'partnership' on this line of products, specifically announcing the motherboard/servers as being certified.

see the following page:

http://www.intel.com/cd/channel/reseller/asmo-na/eng/products/server/436106.htm

e.g.:

Certified in Q4: Intel® Server Board S3420GP, Intel® Server Systems SR1630GP and SR1630HGP

But unfortunately, 'certified' doesn't seem to follow the common-sense meaning that all the hardware is supported. It isn't.

Basically, the driver does not recognize the PCI ID of one of the ethernet chipsets, and even when modified to recognize it, the driver doesn't work properly.

Now that's strange, as the RHEL e1000 driver has no problem.
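For anyone hitting the same wall, the unrecognized-ID claim above is easy to verify yourself: `lspci -nn` (from any Linux live CD) prints each device's numeric vendor:device ID, which is what the driver matches against. A minimal sketch of extracting that ID; the sample line below is illustrative, not captured from a real S3420GP:

```shell
# Pull the vendor:device ID of an Intel NIC out of `lspci -nn` output.
# On a live system you would pipe: lspci -nn | grep -i ethernet
sample='02:00.0 Ethernet controller [0200]: Intel Corporation 82578DM Gigabit Network Connection [8086:10ef]'

# Match the Intel (8086) vendor:device pair in brackets and strip them.
pci_id=$(printf '%s\n' "$sample" | grep -o '\[8086:[0-9a-f]\{4\}\]' | tr -d '[]')
echo "$pci_id"
```

If the ID printed here is not in the list the ESXi e1000/e1000e driver claims, the NIC simply never shows up as a vmnic.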

The mainline distributors, including Ingram, really PUSHED this platform in the reseller arena - without full hardware support!

It's just not reasonable to call something a 'certified' platform with a caveat of 'not really'. We buy a server with 2 ethernet ports, especially a 1U, to meet bid specs, etc.; we play the safe route and pick one that is 'certified'; and then it turns out that if you really want to use both ethernet ports, you need to add a PCIe card, which of course is not possible in a 1U platform if you plan on doing something 'normal' like adding an HCL-compliant RAID card such as one from LSI.

Ingram, Intel, and VMware have been pretty much unresponsive on the issue. I have 6 of these servers sitting around, since I had to buy different ones that did work.

If you are thinking of getting an SR1630GP or S3420 and need 2 ethernet ports, DON'T go with this platform. Hopefully that will change.

Srv02
Contributor

Yes, thank you for your detailed reply... this is exactly what I mean. I bought certified hardware and I cannot use the 2nd NIC or the onboard Intel Matrix RAID controller. Okay, I could have read up on the RAID controller before assuming it would work, but I'm already starting to think about an alternative to ESXi.

daytripper
Contributor

I have a S3420GP with an Intel X3450 running ESXi 4.0 update 1.

I've noticed my system works fine without Intel Matrix RAID enabled. It would be nice if it worked, for redundancy, but ESXi 4.0 Update 1 appears to prefer accessing the two 1 TB drives as two separate datastores.

jgodau
Contributor

Hi all,

Is there any update on this? Any chance of support (somehow) for the Intel 82578DM network card?

We have some machines that only have this card in them that we'd love to get ESXi installed on.

Any thoughts or help appreciated.

Jack...

Solid2489
Contributor

I ordered an S3420GPLC and just had to find out why one of my NICs is missing.

This is typical for VMware: they promote ESXi for free with the intention of getting SMBs into it. But the truth is they don't really care about SMBs, or simply don't have enough resources for entry-level server hardware support. The web is full of people struggling with entry-level server hardware, realizing their hardware is only partially supported, unsupported, or unbearably slow. And this is also the reason why more and more SMB people are looking into 2008 R2 Hyper-V, just like me!

Remember, when it comes to VMware: supported doesn't mean "fully" supported. I have an $800 Adaptec RAID card which is "supported" according to the VMware HCL, but this controller doesn't show up in the vSphere health status because there is no monitoring support! Why would someone run a RAID array without knowing its health status?

I'm so tired of this after almost two years of ESXi hardware trouble, limitations, and horrible performance: I'm going to give Hyper-V a shot!

Solid2489
Contributor

Whoa... today I decided to finally give Hyper-V a shot. I was quite mad, as I had struggled for 3 days to restore a VM due to a Backup Exec 2010 / VMware issue.

So... not only do I have all three NICs running in Hyper-V, but also full RAID support as well as monitoring capabilities for my Adaptec controller. But the most amazing thing is... I can easily get a constant 980 Mbit/s of network throughput. In almost two years of ESXi I never reached anything better than 300 to 400 Mbit/s, and most of the time it was more like 100 to 200 Mbit/s. In other words: disk and network I/O is WAY WAY WAY better in Hyper-V. AMAZING!

Sorry for OT.

Message was edited by: DSTAVERT to remove language slipup

Rumple
Virtuoso

Overall with a single server environment Hyper-V is a valid and acceptable solution. Where you suffer is with the memory management and enterprise recoverability features.

The main drawback right now is that there is no oversubscribing memory (i.e., if you have 8 GB of memory, that's exactly 8 VMs with 1 GB each, or 4 VMs with 2 GB each). You will not be able to load that 5th 2 GB VM, as there is no memory available.
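The sizing rule above is literally integer division; a quick back-of-the-envelope check (the 8 GB / 2 GB figures are just the example from the paragraph):

```shell
# Without memory overcommit, host RAM is a hard cap on VM count:
# max VMs = floor(host RAM / per-VM RAM).
host_gb=8
vm_gb=2
max_vms=$(( host_gb / vm_gb ))
echo "$max_vms VMs of ${vm_gb} GB fit; VM $(( max_vms + 1 )) will not start"
```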

For the most part in small environments that’s really not a problem at all.

I have read about some new memory management features they are working on, but I don't think it's mature yet... still lots of problems with it from what I've read.

Personally, I've never had a problem running ESXi, ESX, or Hyper-V on any server I've implemented, but I never lowball the gear I implement, and I have years of enterprise experience implementing VMware ESX, which helps a lot with avoiding some of the issues seen in smaller environments.

As long as you have something you can support and that works for you, then at the end of the day that's really what is important... the technology is just a means to an end.

jgodau
Contributor

Thanks to "Hardworker" there are now drivers you can use to get the Intel 82578DC (8086:10f0) running. They also work for the Intel 82578DM (8086:10ef) and possibly for some other Intel NICs.

See: http://www.vm-help.com/forum/viewtopic.php?f=12&t=2194 and http://www.vm-help.com/forum/viewtopic.php?f=12&t=1735
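For those trying this: community driver packages of this kind generally work by adding the NIC's PCI ID to ESXi 4.x's ID-to-driver table (`simple.map`), so it's worth checking whether your exact ID is covered before flashing anything. A sketch of that check; the two map lines below are illustrative stand-ins, not copied from a real install:

```shell
# Check whether a given PCI ID has a driver mapping. On an ESXi 4.x
# host the real file is /etc/vmware/simple.map; the format shown here
# (vendor:device subvendor:subdevice class driver) is an assumption
# for illustration.
map='8086:10f0 0000:0000 network e1000e
8086:10ef 0000:0000 network e1000e'

nic='8086:10ef'   # the ID found via lspci -nn
status=$(printf '%s\n' "$map" | grep -q "^$nic " && echo mapped || echo missing)
echo "$nic is $status"
```

If your ID comes up "missing", the driver will never bind to the NIC no matter how the hardware is configured.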

Cheers

Jack...

Technopc
Contributor

Hi

I would just like to know if this Intel S3420GP board, the one you managed to fix by updating the BIOS, has the Intel VT feature, and whether with VT enabled you are able to run 64-bit VMs.
