CKF1028's Posts

Hello,
Lab#01: Fresh installation via VMware-VCSA-all-8.0.2-22617221.iso.
Lab#02: Update from 8.0.1 via VMware-VCSA-all-8.0.2-22617221.iso.
The problem has been solved.
It's a separate issue. For update tasks, we have always used the offline method. We prefer not to log in to ESXi or vCSA via DNS resolution; creating a DNS host entry just for ESXi and vCSA is a meaningless task. We really like simply using IP addresses for ESXi and vCSA management. In fact, we have used several versions (6.5, 6.7, 7.0.3, 8.0.1) this way, and it has worked for a long time. Please don't take this advantage away, VMware engineers.
Thanks for your reply.
Thanks for lamw's reply. ESXi and vCSA are fresh installs and deployments. By the way, an additional note on the DNS-specific issues: we never deploy a DNS server for our ESXi and vCSA infrastructure, and there has never been any problem with the versions (6.5, 6.7, 7.0.3, 8.0.1) we have installed.
Hello,
VMware-VCSA-all-8.0.2-22385739.iso
The VCSA setup wizard is stuck at 19% in stage 2: "An error occurred while starting service 'vc-ws1a-broker'". I tried changing the deployment size from Tiny to Small, but it still didn't work. Does anyone have insight or advice on this?
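Not a fix, but a hedged sketch of commands one could run from the VCSA appliance shell to gather more detail on the failing service. `service-control` is the standard VCSA service manager; the firstboot log location below is an assumption and may differ between builds.

```shell
# Check the state of the broker service (name taken from the error above)
service-control --status --all | grep -i broker

# Stage-2 firstboot errors are usually collected under /var/log/firstboot
# (assumed path -- verify on your build)
ls /var/log/firstboot/
grep -ri "vc-ws1a-broker" /var/log/firstboot/ | tail -n 20
```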
Hello,
After showing the routing table of netstack=STORAGE, it seems the gateway setting is not taking effect.

[root@ESXi:~] esxcli network ip interface ipv4 set -i vmk1 -t static -I 192.168.200.11 -N 255.255.255.0 -g 192.168.200.254
[root@ESXi:~] esxcli network ip route ipv4 list --netstack=STORAGE
Network        Netmask        Gateway  Interface  Source
-------------  -------------  -------  ---------  ------
192.168.200.0  255.255.255.0  0.0.0.0  vmk1       MANUAL
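A hedged sketch of one thing worth trying: on a custom netstack, the per-interface `-g` gateway does not always populate a default route, so adding the route explicitly may help. Addresses and the netstack name are taken from the post above.

```shell
# Add a default route to the STORAGE netstack explicitly
esxcli network ip route ipv4 add --netstack=STORAGE --network=default --gateway=192.168.200.254

# Verify the route now appears
esxcli network ip route ipv4 list --netstack=STORAGE
```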
Hello,
Standalone host with the latest VMware ESXi 7.0.2 build-17630552. I created a new TCP/IP stack via the following commands:

esxcli network ip netstack add -N STORAGE
esxcli network vswitch standard portgroup add -p STORAGE -v vSwitch0
esxcli network vswitch standard portgroup set -p STORAGE -v 200
esxcli network ip interface add -i vmk1 -N STORAGE -p STORAGE
esxcli network ip interface ipv4 set -i vmk1 -t static -I 192.168.200.11 -N 255.255.255.0 -g 192.168.200.254

Q1: In "esxcli network ip route ipv4 list", no route is added for interface vmk1.
Q2: The new TCP/IP stack is visible in the Host Client WebUI, but it can't be edited.
Would someone please tell me how to solve this problem? Thanks.
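A hedged sketch of verification commands for checking the state of the custom netstack and its vmkernel interface (names taken from the commands above); note that routes for a custom netstack must be listed with the `--netstack` option, since the plain `route ipv4 list` shows only the default stack.

```shell
# List all TCP/IP stacks on the host
esxcli network ip netstack list

# Confirm vmk1 exists and its IPv4 configuration took effect
esxcli network ip interface list
esxcli network ip interface ipv4 get -i vmk1

# Routes for the custom stack are listed per-netstack, not in the default view
esxcli network ip route ipv4 list --netstack=STORAGE
```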
Hello,
My question is that import/export via the vSphere client, and upload/download via SFTP from ESXi to my PC, are very slow! With the management vmkernel on a 1 Gbps uplink, the transfer speed averages only about 10 Mbps. With the management vmkernel on a 10 Gbps uplink, the transfer speed averages only about 50 Mbps.
Would you please tell me how to speed this up? Thanks for your reply.
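One avenue sometimes worth testing is exporting through ovftool instead of the datastore browser or SFTP, since the management agents cap per-session throughput; a hedged sketch, where the host address and VM name are hypothetical placeholders:

```shell
# Export a VM directly from a standalone ESXi host with ovftool.
# 192.168.1.10 and MyVM are placeholders for your host and VM name.
ovftool vi://root@192.168.1.10/MyVM ./MyVM-export/
```

Comparing the ovftool rate against the SFTP rate at least shows whether the bottleneck is the network path or the host's management-agent file services.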
Hello,
After deploying vCSA 6.7 successfully, we would like to log in to the vSphere Client (HTML5). I can't log in to the vSphere Client via https://IP-Address/ui, but https://FQDN/ui works.
How can I log in to the vSphere Client via https://IP-Address/ui? Thanks for your reply.
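If the vCSA was deployed with an FQDN as its system name, the /ui login may only work through that name. A hedged sketch of a workaround that avoids running a DNS server: map the FQDN locally on the client machine (the address and hostname below are examples).

```shell
# Linux/macOS client: map the vCSA FQDN to its IP in the local hosts file.
# 192.168.1.50 and vcsa.lab.local are example values -- use your own.
echo "192.168.1.50  vcsa.lab.local" | sudo tee -a /etc/hosts

# Windows client: add the same line to
# C:\Windows\System32\drivers\etc\hosts (run the editor as Administrator)
```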
Hello All,
After installing ESXi 6.0 via "VMware-VMvisor-Installer-6.0.0.update02-4192238.x86_64-Dell_Customized-A04" on a Dell PowerEdge R730, everything is okay! But if I try to install ESXi 6.5 via "VMware-VMvisor-Installer-6.5.0-4564106.x86_64-Dell_Customized-A00" on the same Dell PowerEdge R730, vmnic0 and vmnic1 (Intel X710 DP 10Gb DA/SFP+) are not working! Does anybody know the solution? Thanks for your reply.

[root@R730:~] esxcli network nic list
Name    PCI Device    Driver  Admin Status  Link Status  Speed  Duplex  MAC Address        MTU   Description
------  ------------  ------  ------------  -----------  -----  ------  -----------------  ----  ----------------------------------------------------------
vmnic0  0000:01:00.0  i40en   Up            Up           10000  Full    f8:bc:12:05:85:d0  1500  Intel Corporation Ethernet Controller X710 for 10GbE SFP+
vmnic1  0000:01:00.1  i40en   Up            Up           10000  Full    f8:bc:12:05:85:d2  1500  Intel Corporation Ethernet Controller X710 for 10GbE SFP+
vmnic2  0000:0c:00.0  igbn    Up            Down             0  Half    f8:bc:12:05:85:f0  1500  Intel Corporation Gigabit 4P X710/I350 rNDC
vmnic3  0000:0c:00.1  igbn    Up            Up            1000  Full    f8:bc:12:05:85:f1  1500  Intel Corporation Gigabit 4P X710/I350 rNDC
vusb0   Pseudo        cdce    Up            Up             100  Full    18:fb:7b:5d:d5:ee  1500  Dell(TM) iDRAC Virtual NIC USB Device

[root@R730:~] esxcli network nic get -n vmnic0
  Advertised Auto Negotiation: false
  Advertised Link Modes: 1000BaseT/Full, 10000BaseT/Full, 10000BaseT/Full, 40000BaseCR4/Full, 40000BaseSR4/Full
  Auto Negotiation: false
  Cable Type:
  Current Message Level: -1
  Driver Info:
        Bus Info: 0000:01:00:0
        Driver: i40en
        Firmware Version: 5.04 0x800024bc 17.5.11
        Version: 1.1.0
  Link Detected: true
  Link Status: Up
  Name: vmnic0
  PHYAddress: 0
  Pause Autonegotiate: false
  Pause RX: false
  Pause TX: false
  Supported Ports:
  Supports Auto Negotiation: false
  Supports Pause: false
  Supports Wakeon: true
  Transceiver:
  Virtual Address: 00:50:56:52:57:61
  Wakeon: MagicPacket(tm)
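The output above shows the X710 ports claimed by the native i40en driver. A hedged sketch of one commonly tried workaround, not verified on this hardware: disable i40en so the host falls back to the legacy i40e vmklinux driver, assuming that driver is present in the installed image.

```shell
# Disable the native i40en module (takes effect after reboot)
esxcli system module set --enabled=false --module=i40en
reboot

# After reboot, check which driver now owns the X710 ports
esxcli network nic list
```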
Hello,
My hardware specification is as follows, all bought from Dell.
Server: Dell PowerEdge R730
NIC: Intel X710 DP 10Gb DA/SFP+ + I350 DP 1Gb Ethernet, Network Daughter Card
After the ESXi 6.0 installation, there's an issue that should be solved.
1. After installing ESXi 6.0 #2494585 via the Dell customized image, I can see the x2 10GbE NICs (vmnic0, vmnic1) and the x2 1GbE NICs (vmnic3, vmnic4).
   Dell Customized Image: VMware-VMvisor-Installer-6.0.0-2494585.x86_64-Dell_Customized-A00
2. After installing ESXi 6.0 #2809209 via the latest VMware image, vmnic0 and vmnic1 are missing and only vmnic3 and vmnic4 show up.
By the way, I also tried installing ESXi 5.5 #2403361 via the VMware image, and it has the same problem.
VMware ESXi 6.0 Image Version: VMware-VMvisor-Installer-201507001-2809209.x86_64
VMware ESXi 5.5 Image Version: VMware-VMvisor-Installer-201501001-2403361.x86_64
It seems the latest VMware images, ESXi 6.0 #2809209 and ESXi 5.5 #2403361, can't recognize the Intel X710 NIC. But according to the VMware Compatibility Guide, the Intel X710 has been supported by VMware from ESXi 5.1 to ESXi 6.0. Would you please give me some suggestions to solve this issue? Thanks for your kind reply.
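A hedged sketch, assuming the stock VMware image simply lacks the Intel i40e driver that the Dell customized image bundles: the driver offline bundle can be installed into the running host and activated with a reboot. The datastore path and bundle filename below are hypothetical placeholders for the bundle you download from VMware.

```shell
# Install the Intel i40e driver offline bundle into the running host.
# The path and filename are placeholders -- substitute your downloaded bundle.
esxcli software vib install -d /vmfs/volumes/datastore1/i40e-driver-offline-bundle.zip
reboot

# After reboot, confirm the X710 ports now appear
esxcli network nic list
```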
The result is the same! :S
I've tried x1 or x2 uplinks (E1000) within the vESXi-01 and vESXi-02 vSwitches, and the result is the same.
From vESXi-02, pinging vESXi-01:
ping 172.16.10.111 ---> it's OK!
ping 172.16.20.111 ---> it produces duplicate ICMP responses.
Thanks, André, for reminding me of this; I'm using the vmxnet3 adapter, not E1000. But why does vESXi-02 pinging the vESXi-01 management IP (172.16.10.111) not produce duplicate ICMP responses, while vESXi-02 pinging the vESXi-01 vmkernel IP (172.16.20.111) does? I'll change vmxnet3 to E1000 to test this again and report the result here.
Both vESXi-01 and vESXi-02 have been changed to x1 uplink as below, but the duplicate ICMP issue still exists.
172.16.10.0/24 for MGMT
172.16.20.0/24 for STOR
After changing both the pESXi and vESXi vSwitches to default settings, the duplicate ICMP responses between pESXi and vESXi were solved. But when I ping between the vESXi hosts, the duplicate ICMP responses still occur.

vESXi-01
     Name: vSwitch0
     Uplinks: x2, load balancing set to "port ID", to pESXi vSwitch2 portgroup MGMT
     Portgroups: MGMT (IP=172.16.10.111)
     Name: vSwitch1
     Uplinks: x2, load balancing set to "port ID", to pESXi vSwitch2 portgroup STOR
     Portgroups: STOR (IP=172.16.20.111)

vESXi-02
     Name: vSwitch0
     Uplinks: x2, load balancing set to "port ID", to pESXi vSwitch2 portgroup MGMT
     Portgroups: MGMT (IP=172.16.10.222)
     Name: vSwitch1
     Uplinks: x2, load balancing set to "port ID", to pESXi vSwitch2 portgroup STOR
     Portgroups: STOR (IP=172.16.20.222)

From vESXi-02 to vESXi-01:
~ # ping 172.16.10.111
PING 172.16.10.111 (172.16.10.111): 56 data bytes
64 bytes from 172.16.10.111: icmp_seq=0 ttl=64 time=0.403 ms
64 bytes from 172.16.10.111: icmp_seq=1 ttl=64 time=0.435 ms
64 bytes from 172.16.10.111: icmp_seq=2 ttl=64 time=0.461 ms

~ # ping 172.16.20.111
PING 172.16.20.111 (172.16.20.111): 56 data bytes
64 bytes from 172.16.20.111: icmp_seq=0 ttl=64 time=1.047 ms
64 bytes from 172.16.20.111: icmp_seq=0 ttl=64 time=1.141 ms (DUP!)
64 bytes from 172.16.20.111: icmp_seq=0 ttl=64 time=1.165 ms (DUP!)
64 bytes from 172.16.20.111: icmp_seq=0 ttl=64 time=1.186 ms (DUP!)
This problem has been solved; thanks to André for the answers! :smileygrin:
The vESXi vSwitch does not support IP-hash teaming on the port groups; it should be set back to "Route based on the originating virtual port ID".
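For reference, a hedged sketch of applying that policy from the ESXi shell instead of the UI (the vSwitch name is assumed from the posts above):

```shell
# Set the standard vSwitch load-balancing policy back to
# "Route based on originating virtual port ID"
esxcli network vswitch standard policy failover set --vswitch-name=vSwitch0 --load-balancing=portid

# Verify the active failover/teaming policy
esxcli network vswitch standard policy failover get --vswitch-name=vSwitch0
```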
Thanks for the reply, André.
The switch EtherChannel configuration isn't misconfigured, because pESXi can communicate normally with the LAN/WAN and the storage devices. This strange problem can be solved by reducing the vESXi uplinks from x2 to x1; then pESXi and vESXi can ping each other normally. But I really want to know whether the vSwitch of a vESXi can be configured with NIC teaming (IP-hash or port ID) toward the pESXi vSwitch or not.
Hello,
My nested ESXi 5.1 infrastructure is as below, and everything is working fine (vMotion, HA/DRS, iSCSI/NFS connections to physical storage, and so on). But there's a strange problem between the x1 pESXi and the x2 vESXi regarding ICMP responses; please give me some suggestions to solve it, thanks!

x1 pESXi (Management IP=172.16.10.10)
     Standard vSwitch2
     Uplinks: x4, teamed together with IP-hash to a Cisco switch (EtherChannel and trunk)
     Portgroups: x5 (MGMT, STOR, vMOT, FT, PROD), each with its own VLAN ID

x2 vESXi (Management IP=172.16.10.111 & 172.16.10.222)
There are x5 standard vSwitches, configured as follows:
     Name: vSwitch0
     Uplinks: x2, teamed together with IP-hash to the pESXi vSwitch
     Portgroups: MGMT

When pESXi and vESXi ping each other's vmkernel IP addresses, duplicate ICMP responses occur.

~ # ping 172.16.10.111
PING 172.16.10.111 (172.16.10.111): 56 data bytes
64 bytes from 172.16.10.111: icmp_seq=0 ttl=64 time=0.960 ms
64 bytes from 172.16.10.111: icmp_seq=1 ttl=64 time=0.525 ms (DUP!)
64 bytes from 172.16.10.111: icmp_seq=1 ttl=64 time=0.609 ms
64 bytes from 172.16.10.111: icmp_seq=2 ttl=64 time=0.531 ms (DUP!)
Thanks for Frank's instruction.
In brief, when we need the VDP / vSphere Replication / AD-integration features of vSphere 5.1, we must build a DNS server for them to work properly. But if we only need the vMotion / HA / DRS / FT features of vSphere 5.1, we don't need to build a DNS server, right?