We virtualized a number of NW 6.5 servers (20), and they are all currently having problems with time synchronization. Time drifts quite rapidly within 2 or 3 hours. We initially configured them to use only configured time sources, either NTP or just IP addresses. Then we changed the VMX file to use VMware Tools time sync (i.e. tools.syncTime = "true"). Time still drifted.
What is the correct config to get time back in sync?
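For anyone searching later, the Tools sync option mentioned above lives in the VM's .vmx file; a minimal sketch:

```
# Periodic guest clock sync via VMware Tools (the option named in the
# original post; "FALSE" disables the periodic sync)
tools.syncTime = "TRUE"
```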
Unfortunately vmdesched is only for Windows and Linux at the moment. I called Novell on this issue, and their suggestion was to leave the Tools sync option set to true, configure one server as a Single Reference server, and have every other server be a Secondary that uses NTP via timesync.nlm against that single reference.
It has helped some. Time is no longer hours off, and on a few of the servers it no longer drifts at all.
You need one server that is NOT a virtual machine to act as the Single Reference, and you set EVERY other server up as a Secondary. Timesync.nlm handles the rest.
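A sketch of the timesync settings for each Secondary (the address is a placeholder for your non-virtual single reference; the :123 suffix makes timesync.nlm talk NTP to that source). These can go in SYS:SYSTEM\TIMESYNC.CFG or be entered as SET commands at the console:

```
Type = SECONDARY
Configured Sources = ON
Service Advertising = OFF
Time Sources = 192.0.2.10:123;
```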
It is a bad idea to have ANY time source running on a VM, if you ask me.
Yep, that is what we did. It worked for 9 of the 12 servers (all identically configured). The only thing that worked for the final three was to reduce the polling interval to 10 seconds, which is the exact opposite of what Rob recommends here (http://www.robbastiaansen.nl/vmware/vmware_netware_tips_timesync.html).
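For reference, the interval change can be made at the server console (the default is 600 seconds):

```
SET TIMESYNC Polling Interval = 10
```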
I hope this helps someone else.
Thanks bowulf, that was the only way I managed to get NW 6.5 SP6 in sync, with a Single Reference NetWare server outside of ESX.
It still stays about -150 milliseconds off, but the offset is constant and no longer drifts into the future.
I have experienced the same problem with identically configured servers running NetWare 6.5 SP7. I too tried various changes to the timesync parameters (increasing the polling interval, etc.) and changed the LAN drivers as recommended by others experiencing the same issues. However, the advice given here to lower the polling interval is the only thing that has worked for me. Has anyone slowly increased the 10-second interval to find the breaking point, so to speak?
I took a different approach, and it seems to have worked for me. I set all servers to get their time from one NTP server, not a NetWare server. I then set Misc Hard Time Max to 200 and Misc Hard Time Min to 100. I have hardly ever had a server out of timesync since.
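A sketch of the configured-source piece of that setup (the NTP server address below is a placeholder; the :123 suffix tells timesync.nlm to speak NTP to that source):

```
SET TIMESYNC Configured Sources = ON
SET TIMESYNC Time Sources = 192.0.2.10:123;
```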
Off subject, but how did you get your 6.5 machines virtualized? Did you migrate them or perform a fresh build?
If you migrated them, did you image them first? How was it getting the NIC and SCSI drivers replaced?
Anyone else who can help, any help is greatly appreciated.
In answer to your question, I have used both methods to install or migrate NetWare servers. Which method is appropriate for you depends on your environment, and in either case there will be pros and cons. Personally, I prefer to do a clean install and then migrate user data and applications; that way the platform I am migrating to will have been proven stable and reliable prior to going live, with problems such as timesync already resolved. If you are thinking about imaging have a look at
Changing the SCSI and NIC drivers is straightforward.
The SCSI controller for VMware is an LSI Logic PCI-X Ultra320. The NetWare driver for this is LSIMPTNW.HAM (Host Adapter Module), and for the attached disks it is SCSIHD.CDM (Custom Device Module). Both of these drivers are loaded from C:\NWSERVER\STARTUP.NCF, and this is where you make any required amendments. Incidentally, the drivers are already located in the C:\NWSERVER\DRIVERS directory.
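A sketch of the relevant STARTUP.NCF lines (the slot number is only an example from one VM; check yours against the existing file or the hardware detection during install):

```
# C:\NWSERVER\STARTUP.NCF - VMware virtual SCSI hardware
LOAD LSIMPTNW.HAM SLOT=2
LOAD SCSIHD.CDM
```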
The NIC for VMware is an AMD PCnet. The most popular driver for this appears to be CNEAMD.LAN, while PCNTNW.LAN also works OK. Load or change the network drivers from INETCFG.
Hope this is of some use to you. If you require any further help I would be more than willing to assist. I have been working with NetWare since version 3.11 and am certified as a CNE 4, 5, and 6 and an MCNE.
I used to work as an SE for Novell. The preferred method would be to build a server on ESX in a bogus tree and use the consolidation and migration utility to migrate to the new server. Now, in real life, if you don't care about keeping the server name and IP the same, and you don't have a ton of mappings in your login scripts, then you can build a new server in your existing tree and use the consolidation and migration utility to move the data. I'd use this utility because it preserves the NSS rights; otherwise you will need to use trustbar to move the rights over. Another option is to build an OES Linux server and use the consolidation utility to copy the files over.
Sector - Thank you for this information and sorry for taking so long to reply.
Well, I guess the good news is that the drivers are already on the OS, so I don't need any sort of boot disk or PE environment to get them replaced.
I've been eyeballing Portlock; however, it is $450 per migration. I'm going to try Acronis with a test box tomorrow. If you have a moment, could you post the load lines for the new drivers in STARTUP.NCF and AUTOEXEC.NCF (or what those files should look like)?
DCTony recommended the server migration utility. That is another option I would like to consider as well. From someone with no Novell experience: is that tool risky? I read in the documentation that you can hose your source if the migration fails.
If you use the migration utility, the new server will have the same name and IP as the old server. The LAST step it takes is to remove DS on the source and down the source, so that you don't have two servers in the tree with the same name. You can back out at any point prior to this. So yes, there is some risk, but your data is still there, and the worst-case scenario is that you restore NDS on the old server. If you do a dsrepair -rc prior to the migration, you can restore from that as well.
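For reference, the recoverable-copy step mentioned above is run at the server console before starting the migration:

```
# Create a recoverable copy of the local DS database
LOAD DSREPAIR -RC
```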
Good deal. Thanks for the info.
I'll print out the info on the SMTU and give both methods a whirl. Just to verify: there is nothing to do on the virtual machine once it completes (reconfigure apps, etc.)?
Here is an example of an unaltered STARTUP.NCF taken from a VM.
######## End PSM Drivers ########
######## End CDM Drivers ########
LOAD IDEATA.HAM SLOT=10004
LOAD LSIMPTNW.HAM SLOT=2
######## End HAM Drivers ########
LOAD KEYB.NLM United Kingdom
LOAD CHARSET.NLM CP850
As for other tasks following the initial installation of the server, here's a short list in no particular order.
Install the VMware tools for Netware.
Load INETCFG at the console to transfer the LOAD commands out of AUTOEXEC.NCF and enable the use of this tool for future management of LAN drivers and network settings. This step requires a reboot, so it is best to get it done up front, before the server goes live.
Ensure SLP and timesync are correctly configured. These steps should be performed manually during the installation procedure, before the new server joins the tree, but I have seen many installations where they have been left to chance or incorrectly set up.
Configure remote console access if required using Rconag6.
Add a replica if required.
Tidy up the autoexec.ncf and remove any non-required load commands. Have a look at this article for help http://wiki.novell.com/index.php/NetWare%27s_AUTOEXEC.NCF
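A sketch of the RCONAG6 line for AUTOEXEC.NCF (the password is a placeholder; 2034 and 16800 are the usual TCP and SPX ports for RConsoleJ):

```
LOAD RCONAG6 <password> 2034 16800
```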
I was able to get STARTUP.NCF modified, but due to the Dell utility partition, I copied the C: volume separately from SYS, so I now have two VMDK disks. That means I need SCSIHD.CDM loaded in STARTUP.NCF now too, correct (from your first post)? If so, do I do the same (LOAD SCSIHD.CDM)?
Thanks in advance, and let me know if you want me to log in to the site to give you points.
This is going to sound incredibly stupid, but I had a GroupWise box (VM) whose time was drifting constantly. I tried everything; the server was being killed by high processor utilization (100%). Finally, I configured another server as a Reference server, then pointed the troubled GroupWise box at it in its configured time sources, and WHAM, problem solved. Not the cleanest fix, but it did work.