spectVM's Posts

My favorite in all this is what they have done for ESXi 6. They have finally listened to us and given us a desktop client. Wait till you see it. :smileyplain: The a**holes simply wrapped IE in a desktop shell and are now calling this a desktop client. If this isn't the true definition of adding insult to injury, I am not sure what is. Yes folks, we will soon have the pleasure of installing a useless desktop application that wraps a browser for us. Basically, we are too incompetent to open up a browser and type in the URL to the web client.
Bleeder wrote: Well, I don't think CloudForms is using Virgo    The Virgo project was almost terminated/archived a year ago.  (See [virgo-dev] Still looking for a lead for Virgo).

1. Virgo is not dead.
2. Virgo is a backend OSGi server. CloudForms calls the vSphere API, which exposes services running on the Virgo OSGi server.

The real issue is a mix of overly complex object state management, heavy SOAP payloads and protocol, and poor UI workflow. I am a fan of Rich Internet Application (RIA) UIs, and you can see some of my personal work here: http://www.flexraid.com/portfolio-items/transparent-raid/?portfolioID=62 For my implementation, I quickly understood how light things needed to be at all layers:
- Light database
- Light service layer
- Light REST/RPC layer
- Intelligent client-side state management
DVSchwartz wrote: ... Anyway, just FYI, we had a POC of Red Hat CloudForms, which is a kind of universal virtualization management console for different virtual environments.  It can connect to AWS, OpenStack, vSphere, and several others and control them via a common interface with automated workflows.  When we hooked it into our vSphere, it was able to manage everything much faster and more smoothly than VMware's own web client! So, even a 3rd party management console uses vCenter's APIs more efficiently than whoever wrote the VMware web interface.  That tells me that it needs a LOT of optimization...

That's the thing. I believe CloudForms uses Flex too for the UI. So, it is more of an implementation issue than the technology itself. In any case, I have stuck to ESXi 5.1 because of this fiasco. What's worse is the resources required to run the Web client. Compared to the thick client, the Web client sucks in every way.
@mldmld I went through the same thing. Called MS for support and spent an hour troubleshooting the install, only to discover that some files were corrupted as stated above. Again, and as a test: create a new VM, set the BIOS to EFI, and do a new clean installation.
I just chose the 2012 option.
I have W2k12R2 installed on ESXi 5.1U1. However, the first install was partially corrupted. Everything appeared to be fine until I tried to add server features and found that key PowerShell files were corrupted. I re-installed, and things seem to be fine for now. If anything else was corrupted, I have not encountered it yet. One thing you do have to do is set the BIOS to EFI.
Thanks for the reply. I think I have found a workaround.

First, that link does not address my particular issue. See, this is a brand new vCenter appliance install and configuration. Only one of the four hosts to be added was added. The issue again is that each ESXi host is a VRTX blade (M620), which has access to the datastore created on the shared VRTX storage. Basically, each host is mounting the same shared datastore (the single datastore created on the shared storage), which works fine except for vCenter complaining when importing the hosts.

In any case, my resolution was as follows:
- Add the first host with the datastore attached and mounted
- Unmount the datastore and detach the shared controller from the other hosts before adding them in vCenter
- Re-attach the shared controller and mount the datastore through vCenter after the hosts were added
- Re-configure each host for vSphere HA if needed

Thanks.
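For what it's worth, the unmount/detach steps above can also be done from each host's ESXi shell instead of the UI. A sketch only: the datastore label is the one from this thread, naa.xxxx is a placeholder for the shared volume's device ID, and the block is guarded so it just echoes a note when run anywhere other than an ESXi host.

```shell
# Unmount the shared datastore by label, then detach its backing device
# (ESXi shell only; naa.xxxx is a placeholder device ID).
if command -v esxcli >/dev/null 2>&1; then
  esxcli storage filesystem unmount -l Primary-Shared-Storage
  esxcli storage core device set -d naa.xxxx --state=off
  msg="datastore unmounted and device detached"
else
  msg="esxcli only exists on ESXi; commands shown for reference"
fi
echo "$msg"
```

Re-attaching is the reverse: set the device state back to on and mount the filesystem again.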
All, I have two VRTX to be used for lab purposes that I am currently configuring. The VRTX features 4 blades with a shared storage infrastructure. Each blade has ESXi 5.5 installed on it. I have configured the shared storage on the VRTX, and all of the blades can access it just fine.

The issue I am currently facing is when adding the ESXi hosts to vCenter for management. Adding the first host goes without a hitch. However, adding any subsequent host fails because vCenter finds the datastore attached to the hosts to have the same ID. The error message is (see the attached screenshot): "Datastore 'Primary-Shared-Storage' conflicts with an existing datastore in the datacenter that has the same URL (ds://vmfs/volumes/xxxxx/), but is backed by different physical storage".

Does anyone know how to resolve this? Thanks.
I have the same question. I have set up a VM with a physical RDM to an SSD. The physical RDM was created through the vSphere client, as it sees the disk as supported for RDM. The OS is Windows 8.1. Windows says TRIM is enabled, which is bogus. It says the same for a W2k12R2 install with a VMDK disk. I tried TrimCheck, which says TRIM is not enabled. The disk is described as "ATA Samsung SSD 840 SCSI Disk Device". What gives?
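For anyone else comparing reports: what Windows "says" about TRIM is the fsutil setting, where DisableDeleteNotify = 0 means the OS will issue TRIM commands. As TrimCheck shows, that says nothing about whether the command actually survives the virtual storage stack. A guarded sketch (fsutil is a Windows tool, so outside a Windows guest this just prints a note):

```shell
# Query whether the guest OS issues TRIM. 0 = TRIM enabled at the OS level;
# this does NOT prove the RDM/VMDK layer passes the command to the SSD.
if command -v fsutil >/dev/null 2>&1; then
  fsutil behavior query DisableDeleteNotify
  msg="queried fsutil"
else
  msg="fsutil is a Windows tool; command shown for reference"
fi
echo "$msg"
```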
@vlho Thanks for that link.

Local storage devices often do not support VPD page 0x83, and thus cannot be used for Raw Device Mappings (RDMs). The content of page 0x83 is used as a unique identifier for the device. For more information, see Creating Raw Device Mapping (RDM) is not supported for local storage (1017530).

So basically, VPD support is key. However, I am thinking this is purely to avoid the potential of mixing devices and sending a command to the wrong device. Furthermore, I am guessing VMware is trying not to rely solely on disk serial numbers for identification. VPD combines various unique identifiers into one that is even more unique and fit for cluster usage. If addressing is the only reason, then creating the RDM manually is just as safe as the RDM created through the vSphere client for as long as there is only one device per channel, as is the case with SATA ports. Opinions?
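As a small aside on what that identifier actually carries: the naa.* name ESXi shows for a device comes from the NAA designator in VPD page 0x83. Per the SCSI spec, the first hex nibble is the NAA format (5 = IEEE Registered) and the next six nibbles are the vendor's IEEE OUI, so the name embeds a vendor-assigned worldwide-unique ID, not just a serial number. A plain-shell sketch using the device from this thread:

```shell
# Decode the obvious fields of an NAA identifier (naa.* device name).
dev="naa.50015179590f8c3f"      # device from this thread
id=${dev#naa.}                  # strip the "naa." prefix
naa_format=$(echo "$id" | cut -c1)    # NAA format nibble (5 = IEEE Registered)
oui=$(echo "$id" | cut -c2-7)         # vendor's IEEE OUI
echo "NAA format: $naa_format"
echo "IEEE OUI:   $oui"
```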
I have opted to do physical RDM with the LSI2008 for now. It would be great though to get visibility into the technicals to understand the risks. I have run physical RDM on local SATA disks for years using the vmkfstools approach. However, I am a little leery this time around, since I will be doing more low-level access on the disks in this new setup. It would be great if someone could shed some light on the technicals. Thanks.
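For reference, the vmkfstools approach I mentioned looks like the sketch below. The disk and mapping-file paths are placeholders from my lab, not anything you should copy verbatim, and the block is guarded so it only echoes a note outside an ESXi shell.

```shell
# Create a physical (pass-through) RDM mapping file with vmkfstools.
# -z = physical RDM; -r would create a virtual RDM instead.
DISK=/vmfs/devices/disks/t10.ATA_____ExampleDisk   # placeholder device path
RDM=/vmfs/volumes/datastore1/myvm/disk-rdmp.vmdk   # placeholder mapping file
if command -v vmkfstools >/dev/null 2>&1; then
  vmkfstools -z "$DISK" "$RDM"
  msg="physical RDM mapping created"
else
  msg="vmkfstools only exists on ESXi; command shown for reference"
fi
echo "$msg"
```

The resulting .vmdk is then attached to the VM like any other disk; the data itself stays on the raw device.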
@Abhilash That 4GB+ of overhead is something I would much rather not have imposed. No one is saying vCenter is useless. It is great when needed. Giving me an OS + DB + app layer in order to manage my single box is overkill and should be entirely optional. I still remember the days when I used to run several VMs on ESXi on a Thinkpad with only 2GB of RAM.
Abhilash wrote: ... VMware is not trying to make you buy a $560 upgrade. Their main goal is to go independent of the Windows platform. The current vSphere (C#) client has a dependency on Windows. ...

Almost believable argument. However, we all know that is a pile of you know what. The client is already written, and a dependency on Windows isn't evil enough to justify the resources the web client needs. Those of us with small low-power compute units with 8-16GB of RAM are being pushed out.
Andre, thanks for the reply. The question now is why it matters to ESXi as far as physical RDM goes. Clearly, the only difference is the disk controller. I can use vmkfstools to create the physical RDMs for the disks on the ICH10, but there is a mental comfort in being able to do it through the UI. Is physical RDM safer when done through the LSI2008 than through the ICH10 controller?
Hi guys, Using ESXi 5.1U1, I am able to attach disks located on an LSI2008 as physical RDM to my VMs. However, the option is grayed out for disks located on the motherboard Intel ICH10 AHCI controller. Why is that? Thanks.
Posted in the wrong thread? I don't see how your reply relates to my posted issue.
Well, I ended up using vmfs-fuse to recover the data: VMFS: Unsupported version 5 - How to mount VMFS5 on Ubuntu | My Technical Blog. It truly sucks that ESXi itself could not read the disk.
Hey guys, I need some help pretty bad here.

I have multiple datastores, most of which are VMFS 5 except for one which is VMFS 3. Today I powered down my box to check something in the BIOS (no change made), but when the system came back up, the VMFS 3 datastore was unreadable. I then followed an online tutorial and tried to fix the partition table. However, I set the partition table to GPT when I think the VMFS 3 store was MBR.

My steps:
1. partedUtil getUsableSectors /vmfs/devices/disks/naa.50015179590f8c3f
This gave me a message along the lines of "no partition" or "invalid partition". Can't remember.
2. partedUtil setptbl /vmfs/devices/disks/naa.50015179590f8c3f gpt "1 2048 4123456 AA31E02A400F11DB9590000C2911D1B8 0"
I did the above as per an online tutorial, forgetting that my datastore was VMFS 3 and MBR.
3. partedUtil getUsableSectors /vmfs/devices/disks/naa.50015179590f8c3f
This gave me a result of 312581774.
4. partedUtil setptbl /vmfs/devices/disks/naa.50015179590f8c3f gpt "1 2048 312581774 AA31E02A400F11DB9590000C2911D1B8 0"
Then I did step 4, again following the tutorial.

Clearly these steps were the wrong thing to do for my disk. Can anyone help with fixing all this? Below is the affected disk. It shows the disk as mounted, but ESXi is not able to find the existing volume and mount the VMs on it.
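In hindsight, the step 2/4 mistake was writing a gpt label where an msdos (MBR) one belonged. What the MBR re-creation might have looked like is sketched below. This is an assumption, not a verified recovery: 251 is 0xfb, the MBR partition type for VMFS, and older VMFS3 volumes often start at sector 128 rather than 2048, so the real start sector must be confirmed (e.g. by hexdumping for the VMFS signature) before writing anything. Guarded so it only echoes a note off ESXi.

```shell
# Hypothetical msdos re-creation for an MBR-based VMFS3 disk (ESXi only).
# Format: "partNum startSector endSector type attr"; 251 = 0xfb = VMFS.
# The start sector 128 is an ASSUMPTION for an old VMFS3 volume -- verify first.
DISK=/vmfs/devices/disks/naa.50015179590f8c3f
if command -v partedUtil >/dev/null 2>&1; then
  partedUtil setptbl "$DISK" msdos "1 128 312581774 251 0"
  msg="msdos partition table written"
else
  msg="partedUtil only exists on ESXi; command shown for reference"
fi
echo "$msg"
```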
Also, here is the KB I followed: VMware KB: Recreating a missing VMFS datastore partition in VMware vSphere 5.0/5.1/5.5

More info (following http://virtuallyhyper.com/2012/09/recreating-vmfs-partitions-using-hexdump/):

~ # hexdump -C -n 512 /vmfs/devices/disks/naa.50015179590f8c3f
00000000  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
*
000001b0  00 00 00 00 00 00 00 00  00 00 00 00 1d 9a 00 00  |................|
000001c0  01 00 ee fe ff ff 01 00  00 00 af 9e a1 12 00 00  |................|
000001d0  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
*
000001f0  00 00 00 00 00 00 00 00  00 00 00 00 00 00 55 aa  |..............U.|
00000200
~ # hexdump -C -s 65536 -n 512 /vmfs/devices/disks/naa.50015179590f8c3f
00010000  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
*
00010200
~ # hexdump -C -s 1114112 -n 512 /vmfs/devices/disks/naa.50015179590f8c3f
00110000  0d d0 01 c0 03 00 00 00  11 00 00 00 01 1a 00 00  |................|
00110010  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
00110020  00 00 00 00 00 00 00 00  00 00 00 00 00 00 43 56  |..............CV|
00110030  50 4f 39 34 38 31 30 31  32 36 31 36 30 41 47 4e  |PO94810126160AGN|
00110040  20 20 49 4e 54 45 4c 20  00 00 00 00 00 00 00 00  |  INTEL ........|
00110050  00 00 00 00 00 00 00 00  00 00 00 02 00 00 00 82  |................|
00110060  14 43 25 00 00 00 01 00  00 00 54 02 00 00 53 02  |.C%.......T...S.|
00110070  00 00 03 00 00 00 00 00  00 00 00 00 10 01 00 00  |................|
00110080  00 00 28 ff 64 4b fe 02  43 47 97 22 00 e0 ed 0d  |..(.dK..CG."....|
00110090  0b 0e 1c 98 65 d6 6d 7e  04 00 83 02 3e b0 4b f0  |....e.m~....>.K.|
001100a0  04 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
001100b0  00 00 00 00 00 00 00 00  00 00 c5 ae 95 01 00 00  |................|
001100c0  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
*
00110200
~ # fdisk -lu
/vmfs/devices/disks/naa.50015179590f8c3f
***
*** The fdisk command is deprecated: fdisk does not handle GPT partitions.  Please use partedUtil
***
Found valid GPT with protective MBR; using GPT
Disk /vmfs/devices/disks/naa.50015179590f8c3f: 312581808 sectors, 298M
Logical sector size: 512
Disk identifier (GUID): 19700107-395f-4915-a262-03dda5266ea5
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 312581774

Number  Start (sector)    End (sector)  Size       Code  Name
   1            2048       312581774    298M       0700
~ #
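The "protective MBR" fdisk reports is visible in the first hexdump above: the type byte of the first MBR partition entry (offset 450, i.e. 0x1c2) is 0xee, which marks a GPT protective partition, whereas a classic MBR VMFS partition would carry 0xfb there. A self-contained sketch that reproduces the check on a scratch image (not the real disk):

```shell
# Build a 512-byte scratch MBR image and read back the partition type byte.
# Offsets: partition table starts at 446; type byte of entry 1 is 446+4=450;
# the 0x55AA boot signature sits at 510. Octal escapes: \356=0xEE \125\252=0x55AA.
img=$(mktemp)
dd if=/dev/zero of="$img" bs=512 count=1 2>/dev/null
printf '\356' | dd of="$img" bs=1 seek=450 conv=notrunc 2>/dev/null
printf '\125\252' | dd of="$img" bs=1 seek=510 conv=notrunc 2>/dev/null
type_byte=$(od -An -tx1 -j450 -N1 "$img" | tr -d ' \n')
echo "partition type: 0x$type_byte"   # ee = GPT protective, fb = VMFS
rm -f "$img"
```

Run against the real device path instead of the scratch image, the same od command would show what the setptbl commands left behind.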
The OS will be Windows 8.1. I am also talking about physical RDM and not LUN RDM. Essentially, my test box does not have passthrough. However, I wish to give the VM the full SSD through RDM in the hope of having TRIM supported.
Thanks for the reply. I am referring more to mapping a local SATA SSD as RDM. I have done this before with regular hard drives, with full SMART support and the like. So, now I wonder whether a TRIM command would be supported.