rcporto's Posts

I believe your storage arrays don't allow replication to another vendor's storage array... one solution would be to virtualize your HP and EMC arrays behind an IBM SVC/V7000 and then enable replication between the IBM SVC/V7000s.
Check if your VMs have snapshots.
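One way to check for snapshots is from the ESXi shell with vim-cmd (a sketch; the VM ID 42 below is just an example taken from the inventory listing):

```
# List registered VMs and their IDs (run in the ESXi shell)
vim-cmd vmsvc/getallvms
# Show snapshot details for a VM, using its ID from the list above (e.g. 42)
vim-cmd vmsvc/get.snapshotinfo 42
```

If the second command returns snapshot entries, consolidate or delete them before retrying.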
You may have read about the new Enhanced vMotion in vSphere 5.1, which allows vMotion without shared storage. More info: Enhanced vMotion with vSphere 5.1 - Eric Sloof - NTPRO.NL
You will need one Update Manager for each vCenter. "If your vCenter Server system is part of a connected group in vCenter Linked Mode, and you want to use Update Manager for each vCenter Server system, you must install and register Update Manager instances with each vCenter Server system. You can use an Update Manager instance only with the vCenter Server system with which it is registered." Source: Understanding Update Manager
Have you checked this link: vCenter Server Appliance Greenfield deployment | Adventures in a Virtual World ?
Remember that FT has some limitations, like a 1 vCPU maximum... if your SQL servers require more than 1 vCPU, FT will not be the solution.
Check this Upgrading to vCloud Director 5.1 with Existing Nested ESXi VMs | Matt Vogt
Not sure about the maximum number of datastores on vCenter, but there is a limit of 256 volumes per host, a total of 1024 paths per host, and 32 paths per LUN.
Anyway, if you're using SRM 5.1, reprotect is supported with vSphere Replication and will only sync changes made to the .vmdk. More info: Failback of Virtual Machines in vSphere Replication
You can find documentation here: VMware vSphere Replication Documentation. Regarding reprotect, which SRM version are you using?
Maybe this KB can help you: VMware KB: Virtual machines appear as invalid or orphaned in vCenter Server
Hardware version 7 allows only 8 vCPUs... higher hardware versions support more: 32 vCPUs with version 8 and 64 vCPUs with version 9.
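For reference, the hardware version is recorded in the VM's .vmx file. A sketch (the key names are standard .vmx parameters; the values here are examples, not a recommendation):

```
virtualHW.version = "9"   # vmx-9 (vSphere 5.1) supports up to 64 vCPUs
numvcpus = "16"           # this count would be rejected at vmx-7, which caps at 8
```

In practice you should upgrade the hardware version through the vSphere Client (right-click the powered-off VM > Upgrade Virtual Hardware) rather than editing the file by hand.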
Hi, Have you already checked this post http://communities.vmware.com/thread/303753 ?
Hi, The documentation says that the required ports for the vSphere Web Client are: 9090 (vSphere Web Client HTTP) and 9443 (vSphere Web Client HTTPS). http://pubs.vmware.com/vsphere-50/topic/com.vmware.vsphere.install.doc_50/GUID-8B33C689-1501-4D87-9E80-53FF45D920F2.html
Hi sansaran, You wrote that the connections between the blade switches and the external switches are on VLAN 137, right? If so, that is the problem... the connection between the blade switches and the external switches must be a TRUNK if you want to pass traffic from different VLANs. If you put that connection on VLAN 137 (access mode), the uplinks will only carry traffic from VLAN 137... this is why the vmkping works when you put the VMkernel port on VLAN 137. Let us know if this solution works.
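On the external-switch side the fix looks something like this (a sketch in Cisco IOS syntax; the interface number and the extra VLAN IDs are examples, only VLAN 137 comes from your description):

```
interface GigabitEthernet1/0/1
 ! carry multiple VLANs instead of a single access VLAN
 switchport trunk encapsulation dot1q
 switchport mode trunk
 ! list every VLAN your port groups use, not just 137
 switchport trunk allowed vlan 137,138,139
```

The matching blade-switch uplink must also be configured as an 802.1Q trunk, and each vSwitch port group then tags its own VLAN ID.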
Hello, Check the image below for part of my environment.

Environment summary:
02 Cisco Catalyst 4507R-E
02 IBM BladeCenter H
   04 Nortel (BNT) 2/3 Copper Switch Modules (per BladeCenter)
   Blades HS22 and HS21 with 04 Broadcom NetXtreme NICs on each blade

The two Catalysts are connected by a port channel with 02 ten-gigabit interfaces. Each BladeCenter Ethernet module is connected to a Catalyst by 02 external gigabit interfaces forming an EtherChannel (trunking on the BNT side). We have vSphere ESX 4 Update 2 on the blades, and we configured a vSwitch with 03 NICs and these NIC teaming settings:
-> Load Balancing: Route based on the originating virtual port ID
-> Network Failover Detection: Beacon Probing
-> Notify Switches: yes
-> Failback: no

Everything works fine for hours, but after some time the hosts/VMs on the Catalyst at the secondary site experience intermittent connection loss. There is no problem with the Catalyst switches themselves, because we have many other BladeCenters (on both sites) running Windows, and there is no connection loss on the Windows machines. I suspect the NIC teaming settings on the ESX hosts are causing the network disruption... so, can anyone here suggest a NIC teaming setting for the environment in the picture?