For more information about network configuration, refer to Network performance configuration options.
Delphix Engine <===> Target Host: Implement standard requirements for optimal NFS/iSCSI performance:
Optimal physical network topology:
Low latency: < 1ms for 8K packets.
Network adjacency: minimize network hops; where possible, co-locate in the same blade enclosure or on the same physical host.
Eliminate all Layer 3+ devices - firewalls, IDS, packet filters (Deep Packet Inspection - DPI).
Each additional switch can add latency; fragmentation and packet reordering add significant latency.
10GbE physical uplinks or higher.
Jumbo frames (typically MTU 9000) improve network efficiency for 8K packets: lower CPU, lower latency, greater throughput.
All devices end-to-end must be configured for the larger frame size including switches, routers, fabric interconnects, hypervisors and servers.
Traffic between two VMs on the same host is limited to ~16Gbps when leveraging built-in virtual networking.
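When verifying jumbo frames end to end, it helps to know the largest ICMP payload that fits in a given MTU (20-byte IPv4 header plus 8-byte ICMP header = 28 bytes of overhead). A minimal sketch, using Linux ping syntax and a placeholder peer address:

```shell
#!/bin/sh
# Largest non-fragmenting ICMP payload for a given MTU:
# MTU minus 20 bytes (IPv4 header) minus 8 bytes (ICMP header).
MTU=9000                      # assumed jumbo-frame MTU
PAYLOAD=$((MTU - 28))
echo "Max ping payload for MTU $MTU: $PAYLOAD"   # prints 8972
# Verify end to end with a don't-fragment ping (Linux syntax;
# <peer_ip> is a placeholder):
#   ping -M do -c 3 -s "$PAYLOAD" <peer_ip>
```

If the don't-fragment ping fails, some device on the path is still configured with a smaller MTU.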
Optimal logical flow:
Disable QoS throttles limiting network utilization below line rate (e.g. HP Virtual Connect FlexFabric).
Consider a dedicated VLAN (with jumbo frames) for NFS/iSCSI traffic.
NIC Teaming (at ESX layer) of multiple physical uplinks can provide additional throughput for higher workloads.
Examples: 4 x 1Gb NICs support up to ~400 MB/s of I/O; 2 x 10Gb NICs support up to ~2 GB/s.
VMware KB-1004088 has NIC teaming recommendations.
VMware KB-1001938 has host requirements for physical link aggregation (LACP, EtherChannel).
Jumbo frames check via ping:

Delphix Engine:
    $ ping -D -s 8000 [Target_IP]
    "ICMP Fragmentation needed and DF set from gateway" indicates MTU < 8028
Linux:
    $ ping -M do -s 8000 [Delphix_Engine_IP]
    "Frag needed and DF set (mtu = xxxx)" indicates MTU < 8028
MacOS:
    $ ping -D -s 8000 [Delphix_Engine_IP]
    Note: "sudo sysctl -w net.inet.raw.maxdgram=16384" will increase the max ICMP datagram size on macOS, allowing you to use -s 9000.
Windows:
    > ping -f -l 8000 [Delphix_Engine_IP]
Reference: http://www.mylesgray.com/hardware/test-jumbo-frames-working/
Measure network bandwidth and latency:
Latency in both directions should be < 1ms for an 8KB payload.
Network Hops should be minimized:
traceroute (Unix/Linux) / tracert (Windows).
Throughput in both directions: 50-100 MB/s on 1 GbE, 500-1000 MB/s on 10 GbE physical link.
Always use the CLI when possible.
NICs should use auto-negotiation on Ethernet, with a minimum of 1000 Mbps.
Hard-setting speed/duplex can limit network throughput below line rate.
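The throughput figures above can be sanity-checked against theoretical line rate and then measured in both directions with a tool such as iperf3 (a common choice, not one mandated by this document). A minimal sketch, with <target_ip> as a placeholder:

```shell
#!/bin/sh
# Theoretical line rate in MB/s is link speed in Mb/s divided by 8;
# sustained application throughput is typically somewhat lower.
for mbps in 1000 10000; do
  echo "${mbps} Mb/s link: theoretical max $((mbps / 8)) MB/s"
done
# Measure actual throughput in both directions, e.g. with iperf3:
#   target$ iperf3 -s
#   source$ iperf3 -c <target_ip>        # source -> target
#   source$ iperf3 -c <target_ip> -R     # target -> source
```

Comparing the measured figure against the theoretical maximum quickly exposes duplex mismatches, QoS throttles, or links negotiated below their rated speed.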
Delphix <===> Staging Server (SQL Server, Sybase):
Measure latency and bandwidth to assess transaction log restore performance.
Source <===> Delphix:
Measure latency and bandwidth to explain SnapSync performance.
ESX host <===> ESX host (ESX Cluster):
Live vMotion of a Delphix Engine places a heavy load on the network: the entire memory footprint of the Delphix VM (more precisely, the entire address space of the ESX processes that comprise the VM) must be copied to the receiving ESX host, along with all changes to that address space as they happen. The ZFS cache comprises the bulk of that memory and changes as I/O occurs, and that rate of change is governed by the network bandwidth available to the Delphix Engine.
Attempting to live vMotion a Delphix Engine with a 10Gb NIC over a 1Gb vMotion network is thus likely to fail.
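The copy-versus-change race described above can be made concrete with rough arithmetic. This sketch assumes a hypothetical 256 GiB memory footprint and approximates GiB/s as Gb/s divided by 8, ignoring protocol overhead:

```shell
#!/bin/sh
# Rough time to copy a VM's memory footprint over a vMotion network.
MEM_GIB=256   # hypothetical Delphix Engine memory footprint
for gbps in 1 10; do
  secs=$((MEM_GIB * 8 / gbps))
  echo "${gbps} Gb/s vMotion network: ~${secs}s to copy ${MEM_GIB} GiB"
done
```

If the engine's ZFS cache is changing faster than the vMotion network can re-copy the dirtied pages, the migration may never converge, which is why a vMotion network slower than the engine's data NIC is a poor fit.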