Bart_VK
Enthusiast

Altiris deployment of an ESX server in a VLAN-tagged environment

I want to deploy ESX on BL480c blade servers where the service console must be in a separate VLAN (VLAN ID 2). To deploy ESX I am using the Altiris 6.5 deployment server, which sits in the same VLAN. Both the deployment server and the blades are connected to the same GbE2c blade interconnect switch. The deployment server's port is fixed in VLAN 2, and the ports for the ESX servers have VLAN tagging enabled.

To get the ESX service console into the appropriate VLAN I am using VLAN tagging on the ESX server.
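
On the ESX side, that boils down to setting a VLAN ID on the Service Console port group (virtual switch tagging). Roughly like this, assuming the default vSwitch0 and default port group name:

    # Tag the Service Console port group with VLAN 2 (virtual switch tagging).
    # Assumes the default vSwitch0 and the default "Service Console" port group name.
    esxcfg-vswitch -p "Service Console" -v 2 vSwitch0
    # List vSwitches and port groups to verify the VLAN assignment.
    esxcfg-vswitch -l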

When I deploy the ESX server, the server gets an IP address from DHCP and loads the boot image. Once the boot image has loaded and started, the ESX server can no longer reach the deployment server. An ifconfig returns only the loopback interface information.

However, when I disable VLAN tagging on the switch for the ESX server ports, I get a connection to the deployment server and the deployment continues. But since my default.cfg script points the service console at VLAN 2, I lose the connection again the moment the service console interface is configured. This is logical, because VLAN tagging is disabled on the switch at that point. So disabling VLAN tagging on the switch is not a valid option for an automated installation of my ESX servers.
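
For reference, the relevant part of my default.cfg is the kickstart network line; it looks something along these lines (the addresses here are placeholders, and --vlanid is, as far as I know, the ESX 3.x scripted-install option that tags the console NIC):

    # Kickstart network directive - must be a single line; addresses are placeholders.
    network --bootproto=static --ip=192.168.2.10 --netmask=255.255.255.0 --gateway=192.168.2.1 --nameserver=192.168.2.2 --hostname=esx01.example.com --vlanid=2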

Does somebody know the solution to this problem, or how to avoid it while keeping the installation automated?

2 Replies
HBBC
Contributor

This may or may not be relevant to your problem, but hopefully it'll be useful anyway!

I've just deployed VI3 to BL480s and couldn't get a fully successful *customised* installation via the default RDP deployment scripts at all.

The underlying issue was that the install would start, but halfway through network connectivity was lost and I'd be stuck at the install screen being prompted for IP config info, none of which would work.

It revolves around the install starting on one pair of onboard NICs and then switching to the other pair halfway through, which confused things completely.

Ultimately, I just let it install using the defaults and reconfigured the service console etc. afterwards myself.
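
For anyone in the same spot, the after-the-fact fix was only a handful of service console commands; roughly this (the addresses are examples, not my real ones):

    # See which vmnic maps to which onboard port (useful given the NIC-pair confusion above).
    esxcfg-nics -l
    # Re-address the service console interface (example values).
    esxcfg-vswif -i 192.168.2.20 -n 255.255.255.0 vswif0
    # The console OS default gateway lives in /etc/sysconfig/network (GATEWAY=...).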

I did read a post here saying it was a "known issue" and that the workaround was "to disable the multifunction NIC", but I could never find out how to do that, so in the end I just gave up and lived with the DHCP install. It just wasn't worth the effort for two servers.

I also couldn't seem to customise the default install file with hard-coded IP details without the install complaining about "errors" in the file, despite very carefully copying configs from elsewhere.
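
One thing I still want to rule out (just a guess on my part): Windows line endings creeping into the file when it gets edited on the deployment server, since the installer's parser can be fussy about that. A quick check before deploying:

    # Guess, not confirmed: strip CRLF line endings that can upset the kickstart parser.
    dos2unix default.cfg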

Once I've got these live, I'll "play" with a couple other servers later on and see if I can get to the bottom of it.

My final issue, which I'll make time for later this week, is installing the HP agents on the ESX boxes, as the version that came with my install (7.51 agents) won't go onto ESX 3.0.1, but that's another story...

Regards,

Paul

Texiwill
Leadership

Hello,

Sounds like a similar problem to http://www.vmware.com/community/thread.jspa?threadID=86978&tstart=30. I think there is a DHCP/PXE issue with vlan tags.
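
If so, the usual way around it is to remember that PXE firmware sends untagged frames: make the deployment/console VLAN the untagged (native/PVID) VLAN on the blade-facing ports, and let the tagged VLANs ride alongside for VM traffic. A sketch of the idea in Cisco IOS syntax (the GbE2c CLI is different, but the concept is the same; VLANs 10 and 20 are placeholders for VM networks):

    ! Illustration only - Cisco IOS syntax, not the GbE2c CLI.
    interface GigabitEthernet0/1
     switchport mode trunk
     ! PXE/DHCP boot frames are untagged, so carry VLAN 2 untagged (native).
     switchport trunk native vlan 2
     ! Tagged VLANs for virtual machine traffic (placeholder IDs).
     switchport trunk allowed vlan 2,10,20

With VLAN 2 untagged at the switch, you could also leave the VLAN ID off the Service Console port group entirely, so the console works both during the PXE phase and after the scripted install.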

Best regards,

Edward

--
Edward L. Haletky
vExpert XIV: 2009-2023,
VMTN Community Moderator
vSphere Upgrade Saga: https://www.astroarch.com/blogs
GitHub Repo: https://github.com/Texiwill