VMware Cloud Community
adrianych
Enthusiast

Need help urgently - viClient timeout when adding Storage

I have 2 new Dell servers with 1.8 TB of local storage.

I have done the following:

- plugged in 2 network cables and enabled both ports as NIC management ports (the server has 4 ports)

- joined the domain and set the relevant domain users (myself) as administrators

When I log in using the viClient on my notebook, I see a large warning saying that no persistent storage is found and no log location is set.

i. I set the syslog log location to []scratch/log (settings found on the Internet).

ii. I clicked the link to create a datastore, which opens a pop-up.

iii. In the pop-up I chose the following:

a. Disk/LUN

b. The Dell local disk, non-SSD, 1.8 TB (the other is 100 GB, which I presume is for ESXi)

c. VMFS-5 (the other option is VMFS-3)

d. It then tries to load the current disk layout (it just shows the word "Loading...")

After about 3 minutes, the viClient loses the connection and tries to reconnect.

I can ping the server's IP address, but I cannot seem to log in via the viClient (it just sits at "Connecting...").

After a while, the viClient is able to connect again, with the same large warning saying that no persistent storage is found.

Kindly assist, as there is no way I can add storage to the new server.

18 Replies

zXi_Gamer
Virtuoso

adrianych wrote:

i. I set the syslog log location to []scratch/log (settings found on the Internet).

Can you provide the link to the steps you followed? From what I understand, hostd writes its hostd.log under /scratch/log on the host. If the /scratch location is not configured properly, hostd will crash and restart after some time.
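
As a rough sketch (assuming ESXi 5.x and SSH access to the host), you can check where the scratch location points and whether hostd is up with commands along these lines:

# Show the configured persistent scratch location (empty or /tmp/scratch
# suggests no persistent storage was found)
esxcli system settings advanced list -o /ScratchConfig/ConfiguredScratchLocation

# Show where /scratch actually points right now
ls -l /scratch

# Check whether the hostd management agent is running; if hostd crashes,
# the viClient drops its connection until the agent restarts
/etc/init.d/hostd status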

adrianych
Enthusiast

The config was for syslog:

http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=101662...

The alternative is to leave the field blank, which causes the viClient to give the warning message.
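
As a side note, the setting from that KB can also be applied from the ESXi 5.x command line; a minimal sketch, assuming /scratch/log is the directory you want the logs in:

# Point syslog output at a directory on persistent storage
esxcli system syslog config set --logdir=/scratch/log

# Verify the current syslog configuration
esxcli system syslog config get

# Reload syslog so the new setting takes effect
esxcli system syslog reload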

Josh26
Virtuoso

adrianych wrote:

I have 2 new Dell servers with 1.8TB storage.

Exactly what servers and RAID card?

This could well be an HCL issue.

adrianych
Enthusiast

Dell R620; I don't know which RAID card, but it runs 8x 2.5" SAS HDDs.

zXi_Gamer
Virtuoso

Can you please try the first method from this link: http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=103369...

Also, check hostd.log and /var/log/messages to see whether you are getting an error.

adrianych
Enthusiast

I am not getting any error messages; the viClient just dies while I am adding the first datastore.

dustinn3
Contributor

I'm seeing the exact same issue with my new Dell R620s. I can do everything else, but as soon as I try to add the local disks as storage, it times out and I lose my connection. Have you contacted Dell or VMware yet?

dustinn3
Contributor

I went ahead and opened a case with VMware and they found the issue. The problem is with the partition table: Dell included a Win95 partition on the disks, and vSphere is unable to delete it, so it crashes the client instead of giving a usable error ;-). I think it's the diagnostic partition. I have ESXi installed on the SD cards, so I just deleted the partitions on the local disks.

To delete the partitions, log in to the host via SSH (e.g. PuTTY).

Enter esxcfg-scsidevs -c

Find the Dell disk giving you the issue: /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx

Enter fdisk -u /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx

Enter 2

Enter d

Enter 1

Enter w

Then go back and try to add the datastore again.

Hope this helps.
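
A non-interactive way to do roughly the same cleanup is with partedUtil, which ESXi 5.x also ships. This is only a sketch: the naa.xxxxxxxxxxxxxxxx device is the same placeholder as above (use the disk you found with esxcfg-scsidevs -c), and deleting partitions destroys whatever is on them.

# Show the disk's partition table; each partition line starts with its partition number
partedUtil getptbl /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx

# Delete the offending partition(s) by number, for example partitions 1 and 2
partedUtil delete /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx 1
partedUtil delete /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx 2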

adrianych
Enthusiast

For me, the weird thing is that Dell actually split the RAID 5 into 2 virtual drives, 180 GB and 1.8 TB.

I think ESXi was pre-installed on the 180 GB logical drive.

There was no issue adding the 180 GB "disk", which I would rather not use.

I did try to delete the 1.8 TB logical drive and re-create it from the RAID card. The viClient still died.

I then went back to the RAID card, deleted the 1.8 TB drive and created 3 logical drives of 500 GB + 500 GB + 700 GB (roughly the original 1.8 TB).

I was able to add the first 500 GB and then extend the datastore by adding the next 500 GB.

I have not yet tried adding the third 700 GB drive...

eysfilm
Contributor

We had this same issue too on our brand new R620 servers. We have 30+ servers to deploy, so I didn't really want to have to run the suggested commands on every box. I spoke to Dell support and they didn't even know what that 2 GB partition was for, but it was the one causing the issue.

Instead of running the above commands to wipe the drive, you can also run a Fast Initialize on the RAID controller. It wipes the RAID virtual disk quickly and does the job fine.

This was way easier for us, as we could get to the RAID config right from the BIOS, and we needed to go into the BIOS anyway to change some settings.

Hope this helps someone else save some time.

Go into Device Settings > Integrated RAID Controller 1 > Virtual Disk Management > Select Virtual Disk Operations.

  1. Under Virtual Disk Operations, ensure ‘Fast Initialization’ is selected.
  2. Select ‘Start Operation’.
  3. ‘Confirm’ that all data will be lost.
  4. Select ‘Yes’.
  5. Click ‘OK’ on the confirmation page.
adrianych
Enthusiast

Actually, I found the solution quite some time back, but I seem to have lost the link...

It was something like another file system (NTFS or FAT) partition was already on the disk, which ESXi could not handle by default.

The solution was to SSH into the ESXi host and delete the MBR, or something like that... then the viClient was able to add the "disks"; otherwise it would time out when loading the disk details.

For me it was mostly copy-and-paste plus some eyeball-and-finger copying, so practically nothing stayed in my brain...

Mike_Deardurff
Enthusiast

Thank you so much.

I tried this method and it worked like a charm. The Fast Initialize was very quick and then I was able to add the storage.

:-)

-Mike

adrianych
Enthusiast

This works only if Dell was smart enough to partition the HDDs into 2 volumes under RAID; otherwise it will destroy the entire RAID...

adrianych
Enthusiast

I found the link to the workaround solution, which was posted in another thread:

http://www.virtuallyghetto.com/2011/07/how-to-format-and-create-vmfs-volume.html
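
The linked article covers creating the VMFS volume entirely from the command line. A hedged sketch of that flow on ESXi 5.x follows; the naa.xxxxxxxxxxxxxxxx device, the END sector and the MyLocalDatastore name are placeholders, and the long GUID is the standard VMFS partition type.

# Identify the local disk
esxcfg-scsidevs -c

# Check the current partition table and disk geometry
partedUtil getptbl /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx

# Write a GPT table with one VMFS partition; replace END with the last usable
# sector worked out from the geometry above (this wipes the existing table)
partedUtil setptbl /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx gpt "1 2048 END AA31E02A400F11DB9590000C2911D1B8 0"

# Format partition 1 as VMFS-5 and name the datastore
vmkfstools -C vmfs5 -b 1m -S MyLocalDatastore /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx:1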

jarededelson
Contributor

That did the trick! Confirmed as an easy fix.

Thanks so much for taking the time to document this.

:-)

adrianych
Enthusiast

Well... please note that Dell gave a very illogical solution:

- From the BIOS, log into the RAID controller, delete all RAID virtual disks, and re-create the virtual disk.

Please do not try the above unless you have downloaded the Dell VMware ESXi 5.1 ISO (from the Dell website).

1. The above "solution from Dell" will also wipe out the Dell factory pre-configured or pre-installed VMware ESXi 5.x!

2. There are some Dell 12th-generation servers, such as the R320, R520 and R620, that do not support the stock VMware ESXi 5.1 (at least that's what I call it), as they give a "No NIC found" error if you install the ESXi 5.1 ISO downloaded from the VMware site.

eysfilm
Contributor

All our servers had the internal SD card setup, so the HDD data didn't matter to us at all. Wiping the config on them was just fine.

The entire reason it was hanging was because there was a partition table entry that ESXi didn't like, or couldn't read. Dell runs a bunch of tests on the drives before they go out, and I guess because our ESXi image was on SD cards, Dell didn't re-configure the drives.

I also just came across the same issue on a brand new Dell PowerEdge R720 (internal SD cards as well), but this server would only work when you did a 'Full Initialize'; the Fast Initialize didn't work at all. (In case someone else runs across this issue too.)

In the end everything works fine.

Shawn
