
Target host OS and database configuration options

This topic describes configuration options to maximize the performance of a target host in a Delphix Engine deployment. These network-tuning changes should improve performance for any data source hosted on the target, such as Oracle, SQL Server, Sybase, or vFiles. They should be applied to all Delphix targets.

OS-specific tuning recommendations

Solaris

When using Oracle's Direct NFS feature (dNFS) exclusively, it is unnecessary to tune the native NFS client. However, tuning network parameters is still relevant and may improve performance.
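
If it is unclear whether dNFS is actually in use, the instance itself can report this through the v$dnfs_servers view. The following is an illustrative check only; it assumes sqlplus is on the PATH and the environment (ORACLE_SID, ORACLE_HOME) points at the target instance. Any rows returned indicate that dNFS is active for at least one mount:

$ echo 'select svrname, dirname from v$dnfs_servers;' | sqlplus -s / as sysdba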

Tuning the Kernel NFS client

On systems using Oracle Solaris Zones, the kernel NFS client can only be tuned from the global zone.

On Solaris, by default the maximum I/O size used for NFS read or write requests is 32K. For I/O requests larger than 32K, the I/O is broken down into smaller requests that are serialized. This may result in poor I/O performance. To increase the maximum I/O size:

  1. As superuser, add to the /etc/system file:

    * For Delphix: change the maximum NFS block size to 1M
    set nfs:nfs3_bsize=0x100000

  2. Run this command:

    # echo "nfs3_bsize/W 100000" | mdb -kw

    It is critical that the above command be executed exactly as shown, including the quotation marks and spacing. Errors in the command may cause a system panic and reboot.
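
To confirm the value now in effect on the running kernel, the same mdb interface can be read without the write flag; for example (an illustrative check, assuming the NFS client module is loaded; the value should report as 1048576, i.e. 0x100000):

# echo "nfs3_bsize/D" | mdb -k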

Tuning TCP buffer sizes

On systems using Oracle Solaris Zones, TCP parameters, including buffer sizes, can only be tuned from the global zone or in exclusive-IP non-global zones. Shared-IP non-global zones always inherit TCP parameters from the global zone.

Solaris 10

It is necessary to install a new Service Management Facility (SMF) service that will tune TCP parameters after every boot. These are samples of the files needed to create the service:

File            Installation location
dlpx-tcptune    /lib/svc/method/dlpx-tcptune
dlpx-tune.xml   /var/svc/manifest/site/dlpx-tune.xml
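
The supported files are provided by Delphix. As a rough illustration only, a method script of this kind typically wraps a handful of ndd settings at boot; the parameter names and values in the sketch below are assumptions taken from the sample log output shown later in this section, not the contents of the distributed file:

#!/sbin/sh
# dlpx-tcptune (illustrative sketch, not the supported file distributed by Delphix)
case "$1" in
start)
        echo "Tuning TCP Network Parameters"
        /usr/sbin/ndd -set /dev/tcp tcp_max_buf 16777216
        /usr/sbin/ndd -set /dev/tcp tcp_cwnd_max 4194304
        /usr/sbin/ndd -set /dev/tcp tcp_xmit_hiwat 4194304
        /usr/sbin/ndd -set /dev/tcp tcp_recv_hiwat 16777216
        ;;
esac
exit 0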

  1. As superuser, download the files and install them at the paths listed in the Installation location column of the table above.

  2. Run the commands:

    # chmod 755 /lib/svc/method/dlpx-tcptune
    # /usr/sbin/svccfg validate /var/svc/manifest/site/dlpx-tune.xml
    # /usr/sbin/svccfg import /var/svc/manifest/site/dlpx-tune.xml
    # /usr/sbin/svcadm enable site/tcptune

Verify that the SMF service ran after being enabled by running the command:

# cat `svcprop -p restarter/logfile tcptune`

You should see output similar to this:

[ May 14 20:02:02 Executing start method ("/lib/svc/method/dlpx-tcptune start"). ]
Tuning TCP Network Parameters
tcp_max_buf adjusted from 1048576 to 16777216
tcp_cwnd_max adjusted from 1048576 to 4194304
tcp_xmit_hiwat adjusted from 49152 to 4194304
tcp_recv_hiwat adjusted from 128000 to 16777216
[ May 14 20:02:02 Method "start" exited with status 0. ]

Solaris 11

As superuser, run the following commands:

# ipadm set-prop -p max_buf=16777216 tcp
# ipadm set-prop -p _cwnd_max=4194304 tcp
# ipadm set-prop -p send_buf=4194304 tcp
# ipadm set-prop -p recv_buf=16777216 tcp
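
To read the values back and confirm they are in effect, the same properties can be queried; for example:

# ipadm show-prop -p max_buf,send_buf,recv_buf tcp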

Linux (Red Hat/CentOS)

Tuning the Kernel NFS client

In Linux, the number of simultaneous NFS requests is limited by the Remote Procedure Call (RPC) subsystem. The maximum number of simultaneous requests defaults to 16. Maximize the number of simultaneous requests by changing the kernel tunable sunrpc.tcp_slot_table_entries value to 128.

To ensure that the interface does not drop packets because the driver is configured with a single receive queue, use the following command to view the adapter statistics and check for dropped packets; if drops are seen, increase the Rx queue length (an example of doing so with ethtool follows the output below).

CODE
<LinuxHost> $ ifconfig -a
eth0      Link encap:Ethernet  HWaddr 00:22:BB:CC:DD:22
          inet addr:www.xxx.yyy.zzz  Bcast:www.xxx.yyy.zzz  Mask:255.255.255.0
          inet6 addr: feee::222:bbff:fffc:ddd/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:9000  Metric:1
          RX packets:760729910 errors:0 dropped:700 overruns:0 frame:0
          TX packets:309094054 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:1023150307866 (952.8 GiB)  TX bytes:190673864056 (177.5 GiB)
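
If the counters above show dropped packets, one common remedy is to enlarge the receive ring with ethtool. This is an example approach only; it assumes the interface is eth0, that the NIC driver supports ring resizing, and that 4096 does not exceed the hardware maximum reported by the first command:

CODE
<LinuxHost> $ ethtool -g eth0          # view current and maximum RX/TX ring sizes
<LinuxHost> $ ethtool -G eth0 rx 4096  # raise the RX ring; use a value no larger than the reported maximum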

RHEL 4 through RHEL 5.6
  1. As superuser, run the following command to change the instantaneous value of simultaneous RPC commands:

    # sysctl -w sunrpc.tcp_slot_table_entries=128

  2. Edit the file /etc/modprobe.d/modprobe.conf.dist and change the line:

    install sunrpc /sbin/modprobe --first-time --ignore-install sunrpc && { /bin/mount -t rpc_pipefs sunrpc /var/lib/nfs/rpc_pipefs > /dev/null 2>&1 || :; }

    to

    install sunrpc /sbin/modprobe --first-time --ignore-install sunrpc && { /bin/mount -t rpc_pipefs sunrpc /var/lib/nfs/rpc_pipefs > /dev/null 2>&1 ; /sbin/sysctl -w sunrpc.tcp_slot_table_entries=128; }

    Improper changes to the modprobe.conf.dist file may disrupt the use of NFS on the system; check with your system administrator or operating system vendor for assistance. Before starting, save a copy of modprobe.conf.dist in a directory other than /etc/modprobe.d.

RHEL 5.7 through RHEL 6.2
  1. As superuser, run the following command to change the instantaneous value of simultaneous RPC commands:

    # sysctl -w sunrpc.tcp_slot_table_entries=128

  2. If it doesn't already exist, create the file /etc/modprobe.d/rpcinfo with the following contents:

    options sunrpc tcp_slot_table_entries=128
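
The file takes effect the next time the sunrpc module is loaded (typically at the next boot). To confirm the running value afterwards (assuming the sunrpc module is loaded):

# cat /proc/sys/sunrpc/tcp_slot_table_entries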

RHEL 6.3 onwards

Beginning with RHEL 6.3, the number of RPC slots is dynamically managed by the system and does not need to be tuned. Although the sunrpc.tcp_slot_table_entries tuneable still exists, it has a default value of 2 instead of 16 as in prior releases. The maximum number of simultaneous requests is now determined by the tuneable sunrpc.tcp_max_slot_table_entries, which has a default value of 65535.
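
No tuning is required, but the values can be inspected if desired; for example:

# sysctl sunrpc.tcp_slot_table_entries sunrpc.tcp_max_slot_table_entries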

Tuning TCP buffer sizes

Packages should install their configuration files in /usr/lib/ (distribution packages) or /usr/local/lib/ (local installs). Files in /etc/ are reserved for the local administrator, who may use this logic to override the configuration files installed by vendor packages. It is recommended to prefix all filenames with a two-digit number and a dash to simplify the ordering of the files. The following is an example approach and should be tested beforehand.

CODE
echo "Target Kernel Parameter Tunings.  This is optional but highly recommended."
echo
echo "Tuning TCP Buffer Sizes - Parameters should be as below"
echo
echo "This script takes the recommended vendor approach of creating a file in /usr/lib/sysctl.d and running \"sysctl -p\""
echo "This script will comment out identical settings in /etc/sysctl.conf, which would otherwise override the Delphix settings"
echo "Admins can at their discretion set larger values, either in sysctl.conf or /usr/lib/sysctl.d/60-sysctl.conf"
echo "----"
echo "net.ipv4.tcp_timestamps = 1"
echo "net.ipv4.tcp_sack = 1"
echo "net.ipv4.tcp_window_scaling = 1"
echo "net.ipv4.tcp_rmem = 4096 16777216 16777216"
echo "net.ipv4.tcp_wmem = 4096 4194304 16777216"
cat /dev/null > /usr/lib/sysctl.d/60-sysctl.conf
#Set NOW
NOW=$(date +"%m%d%Y%H%M%S")
echo "net.ipv4.tcp_timestamps = 1" >> /usr/lib/sysctl.d/60-sysctl.conf
echo "net.ipv4.tcp_sack = 1" >> /usr/lib/sysctl.d/60-sysctl.conf
echo "net.ipv4.tcp_window_scaling = 1" >> /usr/lib/sysctl.d/60-sysctl.conf
echo "net.ipv4.tcp_rmem = 4096 16777216 16777216" >> /usr/lib/sysctl.d/60-sysctl.conf
echo "net.ipv4.tcp_wmem = 4096 4194304 16777216" >> /usr/lib/sysctl.d/60-sysctl.conf
echo
echo "Running /sbin/sysctl -p /usr/lib/sysctl.d/60-sysctl.conf"
/sbin/sysctl -p /usr/lib/sysctl.d/60-sysctl.conf
if [ $? -ne 0 ]; then
    echo "Command \"sysctl -p /usr/lib/sysctl.d/60-sysctl.conf\" failed; aborting..."
    exit 1
fi
# Make a backup and Comment out similar lines in /etc/sysctl.conf if they exist
# sed will search for lines which are NOT comments ( ^[^#]* means starts with anything other than a "#" ) but contain the parameters either with "." or "/" notation, and add a Comment.
cp /etc/sysctl.conf /tmp/sysctl.conf.$NOW
sed -i "/^[^#]*net[\./]ipv4[\./]tcp_timestamps/s/^/#Commented out by Delphix /" /etc/sysctl.conf
sed -i "/^[^#]*net[\./]ipv4[\./]tcp_sack/s/^/#Commented out by Delphix /" /etc/sysctl.conf
sed -i "/^[^#]*net[\./]ipv4[\./]tcp_window_scaling/s/^/#Commented out by Delphix /" /etc/sysctl.conf
sed -i "/^[^#]*net[\./]ipv4[\./]tcp_rmem/s/^/#Commented out by Delphix /" /etc/sysctl.conf
sed -i "/^[^#]*net[\./]ipv4[\./]tcp_wmem/s/^/#Commented out by Delphix /" /etc/sysctl.conf
echo
echo "Running /sbin/sysctl -p /etc/sysctl.conf"
/sbin/sysctl -p /etc/sysctl.conf
if [ $? -ne 0 ]; then
    echo "Command \"sysctl -p /etc/sysctl.conf\" failed; aborting..."
    echo "We put a copy of the original at /tmp/sysctl.conf.$NOW"
    exit 1
fi
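
After the script completes, the live values can be spot-checked; for example:

# sysctl net.ipv4.tcp_rmem net.ipv4.tcp_wmem net.ipv4.tcp_window_scaling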

NFSv4 only - enabling recovery of lost locks

RHEL 6.6 onwards

By default, the Red Hat NFSv4 client does not attempt to reclaim locks that were lost due to a lease expiration event. This can cause an application to encounter unexpected EIO errors on system calls such as write. The Delphix use case requires that the NFSv4 client attempt to reclaim locks lost due to lease expiration. The NFS client module parameter recover_lost_locks is used to change this default behavior. Use the following command to check whether the recover_lost_locks option is set to 1:

CODE
grep recover_lost_locks /etc/modprobe.d/*.conf

If the option is currently set to 0, change it to 1. If it is not present, add it as shown below.

As superuser, run the following two commands to enable the lost-locks recovery feature of the NFS client:

CODE
# cat > /etc/modprobe.d/nfs4-locks.conf <<EOF
options nfs recover_lost_locks=1
EOF
 
# [ -d "/sys/module/nfs" ] && echo Y > /sys/module/nfs/parameters/recover_lost_locks
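
Once the nfs module is loaded, the running value can be confirmed directly; it should report Y:

CODE
# cat /sys/module/nfs/parameters/recover_lost_locks
Y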

IBM AIX®   

AIX NFSv4 configuration requirements (7.1 and 7.2)

  1. An NFS Domain must be configured.

  2. The nfsrgyd service must be running.

  3. The NFS server IP address from the Delphix Engine must be mappable to an FQDN.

Configure the NFSv4 domain on the AIX target host

bash-3.2# chnfsdom test.com
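
Running chnfsdom with no arguments should then report the domain that was just set (the exact output format may vary slightly by AIX level); for example:

CODE
bash-3.2# chnfsdom
Current local domain: test.com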

Start the nfsrgyd service and confirm it is active

CODE
bash-3.2# startsrc -s nfsrgyd

bash-3.2# lssrc -s nfsrgyd
Subsystem         Group            PID          Status
 nfsrgyd          nfs              7536760      active

Confirm that the IP address can be resolved

CODE
bash-3.2$ host 172.16.105.81
81.105.16.172.in-addr.arpa is  dcol1.delphix.com

Reference: IBM AIX: HOW TO SETUP NFSV4 MOUNT IN CLIENT AND SERVER

Tuning the Kernel NFS Client

On AIX, by default the maximum I/O size used for NFS read or write requests is 64K. When Oracle does I/O larger than 64K, the I/O is broken down into smaller requests that are serialized. This may result in poor I/O performance. IBM can provide an Authorized Program Analysis Report (APAR) that allows the I/O size to be configured to a larger value.

  1. Determine the appropriate APAR for the version of AIX you are using:

    AIX Version    APAR Name
    6.1            IV24594
    7.1            IV24688
  2. Check if the required APAR is already installed by running this command:

    # /usr/sbin/instfix -ik IV24594

    If the APAR is installed, you will see a message similar to this:

    All filesets for IV24594 were found.

  3. If the APAR is not yet installed, you will see a message similar to this:

    There was no data for IV24594 in the fix database.
  4. Download and install the APAR, as necessary. To find the APARs, use the main search function at http://www.ibm.com/us/en/, specifying the name of the APAR you are looking for from step 1. A system reboot is necessary after installing the APAR.

  5. Configure the maximum read and write sizes using the commands below:

    # nfso -p -o nfs_max_read_size=524288
    # nfso -p -o nfs_max_write_size=524288

  6. Confirm the correct settings using the following command; you should see output similar to this:

    # nfso -L nfs_max_read_size -L nfs_max_write_size
    NAME CUR DEF BOOT MIN MAX UNIT TYPE
      DEPENDENCIES
    --------------------------------------------------------------------------------
    nfs_max_read_size 512K 64K 512K 512 512K Bytes D
    --------------------------------------------------------------------------------
    nfs_max_write_size 512K 64K 512K 512 512K Bytes D
    --------------------------------------------------------------------------------

Tuning delayed TCP acknowledgements

By default, AIX implements a 200ms delay for TCP acknowledgments. However, it has been found that disabling this behavior can provide significant performance improvements.

To disable delayed ACKs on AIX, the following command can be used:

# /usr/sbin/no -o tcp_nodelayack=1

To make the change permanent, use:

# /usr/sbin/no -p -o tcp_nodelayack=1 
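
To confirm the current setting, run no with just the option name; a value of 1 indicates delayed ACKs are disabled:

# /usr/sbin/no -o tcp_nodelayack
tcp_nodelayack = 1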

HP-UX

Tuning the Kernel NFS client

On HP-UX, by default the maximum I/O size used for NFS read or write requests is 32K. For I/O requests larger than 32K, the I/O is broken down into smaller requests that are serialized. This may result in poor I/O performance.

  1. As superuser, run the following command:

    # /usr/sbin/kctune nfs3_bsize=1048576

  2. Confirm the changes have occurred and are persistent by running the following command and checking the output:

    # grep nfs3 /stand/system
    tunable nfs3_bsize 1048576
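
The value in effect on the running kernel can also be queried directly with kctune; for example:

# /usr/sbin/kctune nfs3_bsize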

Tuning TCP buffer sizes

  1. As superuser, edit the /etc/rc.config.d/nddconf file, adding or replacing the following entries:

    TRANSPORT_NAME[0]=tcp
    NDD_NAME[0]=tcp_recv_hiwater_def
    NDD_VALUE[0]=16777216
    #
    TRANSPORT_NAME[1]=tcp
    NDD_NAME[1]=tcp_xmit_hiwater_def
    NDD_VALUE[1]=4194304

In this example, the array indices are shown as 0 and 1. In the actual configuration file, each index used must be strictly increasing, with no missing entries. See the comments at the beginning of /etc/rc.config.d/nddconf for more information.

  2. Run the command:

    /usr/bin/ndd -c

  3. Confirm the settings:

    # ndd -get /dev/tcp tcp_recv_hiwater_def
    16777216
    # ndd -get /dev/tcp tcp_xmit_hiwater_def
    4194304

OS-specific tuning recommendations for Windows

iSCSI tuning

These are the recommended settings for the Windows iSCSI initiator configuration. Note that the parameters below affect all applications running on the Windows target host, so make sure the recommendations do not conflict with best practices for other applications running on that host.

For details about the Windows iSCSI configuration, see Requirements for Windows iSCSI Configuration.

For targets running Windows Server, the iSCSI initiator driver timers can be found at: HKLM\SYSTEM\CurrentControlSet\Control\Class\{4D36E97B-E325-11CE-BFC1-08002BE10318}\<>. Please see How to Modify the Windows Registry on the Microsoft Support site for details about configuring registry settings.

MaxTransferLength (REG_DWORD) - Default: 262144, Recommended: 131072
This controls the maximum data size of an I/O request. A value of 128K is optimal for the Delphix Engine as it reduces the segmentation of the packets as they go through the stack.

MaxBurstLength (REG_DWORD) - Default: 262144, Recommended: 131072
This is the negotiated maximum burst length. 128K is the optimal size for the Delphix Engine.

MaxPendingRequests (REG_DWORD) - Default: 255, Recommended: 512
This setting controls the maximum number of outstanding requests allowed by the initiator. At most this many requests will be sent to the target before receiving a response for any of the requests.

MaxRecvDataSegmentLength (REG_DWORD) - Default: 65536, Recommended: 131072
This is the negotiated MaxRecvDataSegmentLength.

Receive side scaling

Follow the instructions here to enable RSS: Enable Receive Side Scaling (RSS) on Staging/Target Network Interfaces

