
Best practices for network configuration

  1. For more information about network configuration, refer to Network performance configuration options.

  2. Delphix Engine <===> Target Host: Implement standard requirements for optimal NFS/iSCSI performance: 

    1. Optimal physical network topology:

      • Low latency: < 1ms for 8K packets.

      • Network adjacency: minimize network hops; co-locate in the same blade enclosure or on the same physical host (a quick hop/latency check follows this list).

      • Eliminate all Layer 3+ devices: firewalls, IDS, and packet filters (Deep Packet Inspection, DPI).

      • Multiple switches can add latency, and fragmentation and reordering issues will add significant latency.
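
      A minimal spot check of hop count and round-trip latency for an 8K payload is sketched below; it assumes a Linux host and a placeholder IP for the Delphix Engine, and is only one way to gather these numbers.

      # Count network hops to the Delphix Engine
      $ traceroute <Delphix_Engine_IP>

      # Round-trip latency with an 8KB payload; the rtt avg should be < 1 ms
      $ ping -c 20 -s 8192 <Delphix_Engine_IP>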

    2. Optimal throughput:

      • 10GbE physical uplinks or higher.

      • Jumbo frames (typically MTU 9000) improve network efficiency for 8K packets: lower CPU, lower latency, greater throughput (see the MTU example after this list).

      • All devices end-to-end must be configured for the larger frame size, including switches, routers, fabric interconnects, hypervisors, and servers.

      • Traffic between two VMs on the same host is limited to ~16Gbps when leveraging built-in virtual networking.
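
      The commands below sketch how an MTU of 9000 might be checked and applied; the Linux interface name eth0 and the ESXi vSwitch name vSwitch0 are assumptions for illustration, and any change must match the MTU configured on every device in the path.

      # Linux host: check the current MTU, then raise it to 9000
      $ ip link show eth0 | grep mtu
      $ sudo ip link set dev eth0 mtu 9000

      # ESXi host: raise the MTU on a standard vSwitch
      $ esxcli network vswitch standard set -v vSwitch0 -m 9000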

    3. Optimal logical flow:

    4. NIC Teaming (at ESX layer) of multiple physical uplinks can provide additional throughput for higher workloads.

      1. Examples: 4 x 1Gb NICs support up to 400 MB/s of I/O; 2 x 10Gb NICs support up to 2 GB/s of I/O.

      2. VMware KB-1004088 has NIC teaming recommendations, including the route-based-on-IP-hash policy (an inspection example follows this list).

      3. VMware KB-1001938 has host requirements for physical link aggregation (LACP, EtherChannel).

      4. VMware KB-1007371 and a popular blog post detail problems with NIC selection using destination-IP hash.
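
      The current teaming and load-balancing policy on an ESXi standard vSwitch can be inspected as sketched below; the vSwitch name is an assumption, and distributed switches use a different esxcli namespace.

      # Show load balancing policy and active/standby uplinks for a standard vSwitch
      $ esxcli network vswitch standard policy failover get -v vSwitch0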

    5. Jumbo frames check via ping

      Delphix Engine
      $ ping -D -s [Target_IP] 8000
      "ICMP Fragmentation needed and DF set from gateway" indicates MTU < 8028
      
      Linux
      $ ping -M do -s 8000 [Delphix_Engine_IP]
      "Frag needed and DF set (mtu = xxxx)" indicates MTU < 8028
      
      macOS
      $ ping -D -s 8000 [Delphix_Engine_IP]
      
      Note: "sudo sysctl -w net.inet.raw.maxdgram=16384" will increase the max ICMP datagram size on macOS, allowing you to use -s 9000.
      
      Windows
      ping -f -l 8000 [Delphix_Engine_IP]
      
      http://www.mylesgray.com/hardware/test-jumbo-frames-working/


    6. Measure network bandwidth and latency:

      1. Latency in both directions should be < 1ms for an 8KB payload.

      2. Network hops should be minimized: use traceroute (Unix/Linux) or tracert (Windows).

      3. Throughput in both directions: 50-100 MB/s on a 1 GbE link, 500-1000 MB/s on a 10 GbE physical link (a host-side test is sketched after this list).

      4. Always use the Delphix CLI network tests when possible.
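
      Where a quick host-to-host spot check is useful alongside the built-in tests, a throughput measurement such as the sketch below can be run between the relevant endpoints; iperf3 and the placeholder address are assumptions, not part of the Delphix tooling.

      # On one endpoint (e.g. the target host), start a listener
      $ iperf3 -s

      # On the other endpoint, measure throughput in both directions
      $ iperf3 -c <other_host_ip> -P 4 -t 30        # forward, 4 parallel streams, 30 seconds
      $ iperf3 -c <other_host_ip> -P 4 -t 30 -R     # reverse direction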

    7. NICs should use auto-negotiate on Ethernet, with a minimum of 1000 Mbps. 

      • Hard-setting speed/duplex will limit network throughput below the line rate; the negotiated settings can be verified as shown below.
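
      A minimal check of the negotiated speed, duplex, and auto-negotiation state on a Linux target or staging host is sketched below; the interface name eth0 is an assumption.

      # Show negotiated link settings for the interface
      $ sudo ethtool eth0 | grep -E 'Speed|Duplex|Auto-negotiation'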

  3. Delphix <===> Staging server (SQL Server, Sybase):

    Measure latency and bandwidth to assess transaction log restore performance.

  4. Source <===> Delphix:

    Measure latency and bandwidth to explain SnapSync performance.

  5. ESX host <===> ESX host (ESX Cluster):

    1. Live migration (vMotion) of a Delphix Engine requires substantial bandwidth on the vMotion network. This is because the entire memory footprint of the Delphix VM (more precisely, the entire address space of the ESX processes that comprise the VM) must be copied to the receiving ESX host, along with all changes to that address space as they happen. The Delphix Engine's ZFS cache comprises the bulk of that memory and changes as I/O occurs, and that rate of change is governed by the network bandwidth available to the Delphix Engine.

    2. Attempting to live vMotion a Delphix Engine with a 10Gb NIC over a 1Gb vMotion network is thus likely to fail; the vMotion network's uplink speed and jumbo-frame path can be checked as sketched below.
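
    A quick verification on each ESX host in the cluster is sketched below; it assumes ESXi shell access, and the peer IP is a placeholder for the other host's vMotion interface.

      # List physical uplinks and their negotiated speeds
      $ esxcli network nic list

      # Verify that large, unfragmented packets traverse the vMotion network
      # (8972 bytes = 9000-byte MTU minus IP/ICMP headers)
      $ vmkping -d -s 8972 <other_ESX_host_vMotion_IP>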
