I have 2 new Dell servers with 1.8TB storage.
I have done the following:
- plugged in 2 network cables and enabled both ports as NIC management ports (the server has 4 ports)
- joined the domain and set the relevant domain users (myself) as administrators
When I log in using the vSphere Client (viClient) on my notebook, I see a large warning saying that no persistent storage is found and no log location is set.
i. I added /scratch/log to the syslog setting (found the settings on the Internet).
ii. I clicked the link to create a datastore, which opens a pop-up.
iii. In the pop-up I chose the following:
a. Disk/LUN
b. Dell local disk, non-SSD, 1.8TB (the other is 100GB, which I presume is for the ESXi install)
c. VMFS-5 (the other option is VMFS-3)
d. Then it loads the current layout (the word "Loading..." is shown)
After 3 minutes, the viClient lost the connection and tried to reconnect.
I can ping the server's IP address, but I cannot seem to log in via the viClient (it just sits at "Connecting...").
After a while, the viClient is able to connect again, with the same large warning saying that no persistent storage is found.
Kindly assist, as there is no way I can add storage to the new server.
i. I added /scratch/log to the syslog setting (found the settings on the Internet).
Can you provide the link to the steps you followed? From what I understand, the host generates hostd.log, which resides in /scratch/log. If your /scratch/log partition is not configured properly, hostd will crash and restart after some time.
This is the syslog configuration.
The alternative is to leave the field blank, which causes the viClient to give the warning message.
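For reference, the persistent log location on ESXi 5.x can also be set from the shell instead of the viClient dialog. A minimal sketch, assuming SSH access to the host and a datastore named datastore1 (the datastore name is an assumption, substitute your own):

```shell
# Point the syslog directory at persistent storage
# ("datastore1" is a placeholder - use your real datastore name)
esxcli system syslog config set --logdir=/vmfs/volumes/datastore1/logs
# Apply the new setting
esxcli system syslog reload
# Verify the configured log directory
esxcli system syslog config get
```

Note this only silences the log-location part of the warning; the "no persistent storage" part still needs a working datastore.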
adrianych wrote:
I have 2 new Dell servers with 1.8TB storage.
Exactly what servers and RAID card?
This could well be an HCL issue.
Dell R620; not sure which RAID card, but it runs 8x 2.5" SAS HDDs.
Can you please try the first method from this link: http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=103369...
Also, do check hostd.log under /var/log/ to see if you are getting an error.
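Checking hostd from the ESXi shell might look like the sketch below (run on the host while reproducing the hang in the viClient):

```shell
# Watch hostd live while the viClient tries to create the datastore
tail -f /var/log/hostd.log

# ...or, after the client drops, pull the most recent error lines
grep -i error /var/log/hostd.log | tail -n 20
```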
I am not getting any error messages, just that the viClient dies while I am adding the first datastore.
I'm seeing the exact same issue with my new Dell R620s. I can do anything else, but as soon as I try to add the local disks to storage, it times out and I lose my connection. Have you contacted Dell or VMware yet?
I went ahead and opened a case with VMware, and they found the issue. The problem is the partition table: Dell included a Win95 partition on the disks, and vSphere is unable to delete it, crashing the client instead of giving a usable error. I think it's the diagnostic partition. I have ESXi installed on the SD cards, so I just deleted the partitions on the local disks.
To delete the partitions, log in to the host via SSH (e.g. with PuTTY).
Enter esxcfg-scsidevs -c
Find the Dell disk giving you the issue: /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx
Enter fdisk -u /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx
Enter 2
Enter d
Enter 1
Enter w
Then go back and try to add the datastore again.
Hope this helps.
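As an alternative to the fdisk keystrokes above, ESXi 5.x also ships partedUtil, which additionally handles GPT disks; a hedged sketch, assuming the same naa.* device path found via esxcfg-scsidevs:

```shell
# Show the current partition table (label type, partitions, sizes)
partedUtil getptbl /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx

# Delete the offending partition. "1" is the partition number reported
# by getptbl - adjust it to whatever the leftover Dell partition is.
# This is destructive: double-check the device path first.
partedUtil delete /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx 1
```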
For me, the weird thing is that Dell actually split the RAID 5 into 2 drives: 180GB and 1.8TB.
I think ESXi was pre-installed on the 180GB logical drive.
There was no issue adding the 180GB "disk", which I would rather not use.
I did try to delete the 1.8TB and re-create the 1.8TB logical drive from the RAID card. The viClient still died.
I went to the RAID card, deleted the 1.8TB, and created 3 logical drives: 500GB + 500GB + 700GB (a total of 1.7TB).
I was able to add the first 500GB, then extend it by adding the next 500GB.
I have not yet tried adding the 3rd 700GB...
We had this same issue on our brand new R620 servers. We have 30+ servers to deploy, so I didn't really want to have to run the suggested commands on every box. I spoke to Dell support, and they didn't even know what that 2GB partition was for, but it was the one causing the issue.
Instead of running the above commands to wipe the drive, you can also run a Fast Initialize on the RAID controller. It wipes the RAID nice and fast and does the job fine.
This was way easier for us, as we could get to the RAID config right from the BIOS, and we needed to go into the BIOS anyway to change some settings.
Hope this helps someone else save some time.
Go into Device Settings > Integrated RAID Controller 1 > Virtual Disk Management > Select Virtual Disk Operations.
Actually, I found the solution quite some time back, but I seem to have lost the link...
It was something like another file system (NTFS or FAT) was already on the disk, which ESX(i) could not handle by default.
The solution was to SSH into the ESX(i) host and delete the MBR or something like that... then the viClient was able to add the "disks"; otherwise it would time out when loading the disk details.
For me it was mostly copy-and-paste plus some eyeball-and-finger copying, so practically nothing stays in my brain...
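Deleting the MBR, as described above, usually comes down to zeroing the first 512-byte sector of the disk. On a real host the target would be the /vmfs/devices/disks/naa.* device path; the sketch below runs against a scratch file instead, so nothing real gets harmed:

```shell
# Simulate a disk whose first sector holds a stale partition table
head -c 1024 /dev/urandom > /tmp/demo_disk.img

# Zero only the first 512-byte "MBR" sector; the rest stays intact.
# On a real host the of= target would be /vmfs/devices/disks/naa.xxxx
# (destructive - triple-check the device path before running it there)
dd if=/dev/zero of=/tmp/demo_disk.img bs=512 count=1 conv=notrunc 2>/dev/null
```

After that, a rescan in the client should let ESXi write its own partition table.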
Thank you so much.
I tried this method and it worked like a charm. The Fast Initialize was very quick, and then I was able to add the storage.
-Mike
This works only if Dell was smart enough to partition the HDD into 2 volumes under RAID. Otherwise it will destroy the entire RAID...
I found the link as a workaround solution which was posted in another thread....
http://www.virtuallyghetto.com/2011/07/how-to-format-and-create-vmfs-volume.html
That did the trick! Confirmed as an easy fix.
Thanks so much for taking the time to document this.
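The linked article's approach boils down to writing a fresh GPT label by hand and then formatting the partition as VMFS-5. A rough sketch; the device path, end sector, and datastore name below are placeholders to fill in:

```shell
DISK=/vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx

# Start sector 2048 is typical; END_SECTOR must be computed from
#   partedUtil getUsableSectors "$DISK"
# The long GUID is the VMFS partition type GUID.
partedUtil setptbl "$DISK" gpt \
  "1 2048 END_SECTOR AA31E02A400F11DB9590000C2911D1B8 0"

# Format partition 1 as VMFS-5 ("LocalDS" is a placeholder name)
vmkfstools -C vmfs5 -S LocalDS "${DISK}:1"
```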
Well... please note that Dell gave a very illogical solution:
- From the BIOS, log into the RAID controller, delete all RAID virtual disks, and re-create the virtual disk.
Please do not try the above unless you have downloaded the Dell VMware ESXi 5.1 ISO (from the Dell website).
1. The above "solution from Dell" will also wipe out the Dell factory pre-configured or pre-installed VMware ESXi 5.x!
2. Some Dell 12th-generation servers, such as the R320, R520 & R620, do not support the stock VMware ESXi 5.1 (at least that's what I call it), as they will show a "No NIC found" error if you install the VMware ESXi 5.1 ISO downloaded from the VMware site.
All our servers had the internal SD cards set up, so the HDD data didn't matter to us at all. Wiping the config on them was just fine.
The entire reason it was hanging was because there was a partition table entry that ESXi didn't like, or couldn't read. Dell does a bunch of tests on the drives before they go out, and I guess because our ESXi image was on SD cards, Dell didn't re-configure the drives.
I also just came across the same issue on a brand new Dell PowerEdge R720 (internal SD cards also), but this server would only work when you did a Full Initialize. The Fast Initialize didn't work at all. -- in case someone else runs across this issue too.
In the end all works fine.
-Shawn