Architecture checklist
Overview
The Architecture Checklist is an overview that combines the most important architecture best practices into a single list. For more detail on the reasoning behind these best practices, review the Architecture Checklist FAQ.
Architecture best practices for hypervisor host and VM guest - ESX
Hypervisor
ESXi 6.x or 7.0 is recommended. ESXi 5.5 or earlier is no longer supported.
HyperThreading (HT) for Intel®-based servers (no HT on AMD CPUs).
Disable HT in the BIOS and on the ESXi host, and disable HT Sharing on the Delphix VM, for consistency. This is our best practice: disable at all levels. Any other combination may result in non-deterministic performance.
When HT cannot be turned off for both the Host and Delphix VM, it should be turned on at all levels, not run in a "mixed-mode".
HT disablement at a guest level only can result in non-deterministic performance.
A dedicated ESXi host, cluster, or DRS cluster is recommended where consistent VDB performance is paramount.
VM migration to a new host (e.g. VMware HA or vMotion) can create mismatched HT settings.
ESXi overhead (resources required for the hypervisor cannot be reserved; they must be left unallocated).
Memory Overhead - 10% of available RAM must not be allocated to guest VMs.
Example: 256GB RAM, allocate 230GB to Delphix VM, leave 26GB for ESX
CPU Overhead - At least 2 cores (ideally 4) must not be allocated to guest VMs.
Example: If 16 physical cores are available, allocate 12 to the virtual machines, leaving 4 for the hypervisor
Why 4? Certain hypervisor functions require precedence over any virtualized system. If the hypervisor needs more CPU than is currently available, it can de-schedule all other virtual processes to ensure adequate CPU resources for itself. Setting aside CPUs for hypervisor functions, and not over-subscribing them, ensures the hypervisor never has to de-schedule running virtual processes (worlds).
Even if the Delphix VM is the only VM on a host, the hypervisor is still active and essential, and it still needs resources.
BIOS Power Management should be set to High Performance where ESXi controls power management.
Can be impacted by VMware KB 1018206 - poor VM application performance caused by power management settings.
Ensure that all BIOS-managed C-States other than C0 are disabled if power management is hardware-controlled.
Ensure that all ACPI sleep states above S0 are disabled in the BIOS.
Examples for popular server lines from Cisco, HP, and Dell are below. Specific models vary; use the appropriate spec sheet.
UCS: disable the Processor Power States, disable Power Technology, set Energy Performance to "Performance"
HP ProLiant: set HP Power Regulator to HP "Static High Performance" mode
Dell: set BIOS System Profile to "Performance Optimized" mode
VMware HA can be enabled; VMware DRS is generally disabled.
Blade/Rack Server Firmware and ESXi Drivers should be updated to the latest versions. This is particularly important for Intel®-based servers with E5-2600 v2 processors.
Two typical server configurations:
Blade Farm
Rack Server
Hyper-Converged configurations are possible for high performance.
Virtual Machine Guest
For VM machine settings, see Virtual Machine Requirements for VMware Platform.
VMware Guest Specifications:
Minimum: 8 vCPU x 64GB
Small: 8 vCPU x 128GB
Medium: 16 vCPU x 256GB
Large: 24 vCPU x 512GB
Reserve 100% of RAM and CPU:
If the ESX host is dedicated to Delphix, CPU and RAM reservations are advised but not necessary; however, swap space will be required on the hypervisor to compensate for the lack of reserved RAM.
Hyperthreading - See the ESX host section at the top. Disable HT Sharing on VM, disable HT on ESX Host.
Assign single-core sockets for vCPUs in all cases. If there is a compelling reason to use multi-core vCPUs, reference the following article from VMware, which describes matching virtual multi-core sockets to the hardware ESX is running on: VMware Article on CoresPerSocket.
Example: the ESXi host has 2 sockets x 18 cores of Intel Xeon and the Delphix Engine needs 16 vCPUs. Configure the Delphix VM with 2 virtual sockets and 8 cores per socket to match the hardware architecture.
Avoid placing other extremely active VMs on the same ESX host.
Monitoring - vSphere Threshold Alerts for CPU, Network, Memory, Capacity.
To set the number of vCPUs per virtual machine via the vSphere client, please see "Virtual CPU Configuration" in the Administration guide.
Delphix VM CPU Utilization - Delphix KB article on what makes Delphix VMs similar to other resource-intensive applications
Exchange on VMware Best Practices - VMworld 2013 session
Ensure that the latest available VMware drivers and firmware versions are installed for HBAs, NICs and any other hardware components configured on the Delphix virtual machine. This is a critical step that can have a massive impact on the performance and robustness of our solution.
Architecture best practices for network configuration
For more information about network configuration refer to Network Performance Configuration Options.
Delphix Engine <===> Target Host: Implement standard requirements for optimal NFS/iSCSI performance:
Optimal physical network topology:
Low latency: < 1ms for 8K packets.
Network adjacency: minimize network hops, co-locate in the same blade enclosure, co-locate on the same physical host.
Eliminate all Layer 3+ devices - firewalls, IDS, packet filters (Deep Packet Inspection - DPI).
Multiple switches can add latency, and fragmentation and reordering issues will add significant latency.
Optimal throughput:
10GbE physical uplinks or higher.
Jumbo frames (typically MTU 9000) improve network efficiency for 8K packets: lower CPU, lower latency, greater throughput.
All devices end-to-end must be configured for the larger frame size including switches, routers, fabric interconnects, hypervisors and servers.
Traffic between two VMs on the same host is limited to ~16Gbps when leveraging built-in virtual networking.
Optimal logical flow:
Disable QoS throttles limiting network utilization below line rate (e.g. HP Virtual Connect FlexFabric).
Consider a dedicated VLAN (with jumbo frames) for NFS/iSCSI traffic.
NIC Teaming (at ESX layer) of multiple physical uplinks can provide additional throughput for higher workloads.
Examples: 4 x 1Gb NICs support up to 400 MB/s of I/O; 2 x 10Gb NICs support up to 2 GB/s of I/O.
VMware KB-1004088 has NIC teaming recommendations, including the route-based-on-IP-hash policy.
VMware KB-1001938 has host requirements for physical link aggregation (LACP, EtherChannel).
VMware KB-1007371 and a popular blog post detail problems with NIC selection using dest-IP hash.
Fragmentation and dropped packets can result in excessive retransmissions of data, reducing throughput.
On AIX, the LSO (Large Send Offload) and LRO (Large Receive Offload) network features have caused many problems. The virtualization engine employs LSO only (no LRO), while AIX employs both. These features are enabled by default. The best practice is to disable LSO on the VE when the target is AIX.
LSO enabled on the Virtualization Engine can result in dramatically limited or blocked transmit throughput (and increased re-transmission traffic).
To disable these features entirely, disable LSO on the Delphix Virtualization Engine and disable both LSO and LRO on the AIX target guest VM. The specific commands to disable LSO and LRO on AIX are not captured here, as they appear to be context/version-specific.
Jumbo frames check via ping:
Delphix Engine: $ ping -D -s [Target_IP] 8000 ("ICMP Fragmentation needed and DF set from gateway" indicates MTU < 8028)
Linux: $ ping -M do -s 8000 [Delphix_Engine_IP] ("Frag needed and DF set (mtu = xxxx)" indicates MTU < 8028)
MacOS: $ ping -D -s 8000 [Delphix_Engine_IP] (Note: "sudo sysctl -w net.inet.raw.maxdgram=16384" will increase the max ICMP datagram size on Mac, allowing you to use -s 9000)
Windows: C:\> ping -f -l 8000 [Delphix_Engine_IP]
Reference: http://www.mylesgray.com/hardware/test-jumbo-frames-working/
Measure Network Bandwidth and Latency:
Latency in both directions should be < 1ms for an 8KB payload.
Network Hops should be minimized:
traceroute (Unix/Linux) / tracert (windows).
Throughput in both directions: 50-100 MB/s on 1 GbE, 500-1000 MB/s on 10 GbE physical link.
Always use the CLI when possible.
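In addition to the Delphix CLI network tests, a quick OS-level spot check can be made with common tools; this is a sketch assuming Linux hosts with iperf3 installed, and the host names are placeholders:
On the receiving host: $ iperf3 -s
On the sending host: $ iperf3 -c target-host -t 30 (add -R to measure the reverse direction)
Latency for an 8KB payload: $ ping -c 100 -s 8192 target-host (review the average round-trip time)
Network hops: $ traceroute target-host (tracert target-host on Windows)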
NIC should use Auto-negotiate on Ethernet with a minimum of 1000Mbps.
Hard setting speed/duplex will limit network throughput below the line rate.
Delphix <===> Staging Server (SQL Server, Sybase):
Measure latency and bandwidth to explain transaction log restore performance.
Source <===> Delphix: Measure latency, bandwidth to explain snapsync performance
ESX host <===> ESX host (ESX Cluster):
Live migration (vMotion) of a Delphix Engine requires significant bandwidth because the entire memory footprint of the Delphix VM (more precisely, the entire address space of the ESX processes that comprise the VM) must be copied to the receiving ESX host, along with all changes to that address space as they happen. Our ZFS cache comprises the bulk of that memory and changes as I/O occurs, and that rate of change is governed by the network bandwidth available to the Delphix Engine.
Attempting to live vMotion a Delphix Engine with a 10Gb NIC over a 1Gb vMotion network is thus likely to fail.
Architecture best practices for storage
Configuration:
Virtual Disks (VMDK) with spinning or tiered media must be thick provisioned + eager zeroed.
For storage which is 100% SSD/EFD/Flash-based, continue to thick provision, however, eager zero is not necessary.
VMDKs may be homed on VMFS or RDM storage, however, VMFS is much more common and generally preferred.
Regardless of VMFS/RDM selection, physical LUNs must have uniform characteristics (RAID, spindle count, tier) and should be thin provisioned to save time.
Storage allocated must be identical between each vSCSI controller.
The supported maximum of four virtual SCSI controllers (PVSCSI (default) or LSI Logic Parallel) must be enabled. A mix of different types of SCSI controllers is not supported within the engine.
Virtual Disks must be evenly distributed across the 4 virtual SCSI controllers. For example, 8 virtual disks should be configured as 2 disks per controller: SCSI (0:0), SCSI (0:1), SCSI (1:0), SCSI (1:1), SCSI (2:0), SCSI (2:1), SCSI (3:0), SCSI (3:1).
You don't need to account for the OS in the even distribution of disks across controllers, just pick one; the OS doesn't place a substantial load on the controller.
To provision VMDK disks over 16TB in size, the vSphere web client must be used, the win32 client will return an error.
As of 5.1.3, we require 127GB for the system disk. Older versions defaulted to 150GB.
Due to our unified OVA for masking, the 127GB requirement also applies to Masking Engines.
VMDK for the Delphix Engine OS is often stored on the same VMFS volume as the Delphix VM definition file (aka VMX). In that case, the VMFS volume must have sufficient space to hold the Delphix VMX configuration, the VMDK for the system disk, a swap/paging area if the memory reservation was not enabled for the Delphix Engine (or to suspend the system), and any related VMware logging.
Set ESX Storage Multipathing IO optimization (KB 2072070).
Set multipathing policy to round-robin.
Set path switching IO operation limit to 1 (default 1000).
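As an illustration, the policy can be checked and set per device from the ESXi shell with esxcli; the naa.* identifier below is a placeholder for the actual LUN:
# esxcli storage nmp device list
# esxcli storage nmp device set --device naa.xxxxxxxxxxxxxxxx --psp VMW_PSP_RR
# esxcli storage nmp psp roundrobin deviceconfig set --device naa.xxxxxxxxxxxxxxxx --type iops --iops 1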
Verify that the queue depth setting on the virtual disks used for VMware is appropriate, based on the minimum of the HBA type's queue depth and the underlying storage's combined queue depth for all hosts attached to the same controller.
See this VMware KB article for how to check/set the queue depth
Testing:
Run Storage Performance Tool on the raw storage before any engine configuration. This is a one-time opportunity for each engine upon installation
Required maximum storage latency is < 2ms for writes and < 10ms (95th percentile) for small random reads. Minimum passing grades: 4KB/8KB reads (B-), 1MB reads (A), 1KB/128KB writes (A).
If working with the Delphix Professional Services team, we would expect to run additional baseline performance measurements via composite tools and scripts.
e.g. "Sanity Check" (Oracle) or "DiskSpd" (MSSQL)
Detailed Discussion:
Before beginning any discussion on storage performance, or at the beginning of the installation, collecting the following specs from your storage administrator will assist in understanding the environment:
Vendor, Model (For example: EMC VMAX 40k, HP 3PAR StoreServ 7000)
IO latency SLO (For example 5ms 99.999%)
IOPS/GB SLO (For example: 0.68 IOPS/GB for EMC Gold-1)
Cache type and size (For example FAST cache 768GB)
Tier, #Pools; if auto – tiering; relocation schedule (For example: Gold/Silver/Auto/3 pools/etc)
Pool detail: (#) drives, RPM, Type (For example: Pool1: (20) EFD, (30) 15k SAS, Pool 2: (40) 10k SATA)
Connection (For example: 16Gb Fibre Channel, 10Gb iSCSI, 20Gb N
Architecture best practices for Delphix engine data protection
For protection against physical host failure - leverage VMware HA.
For protection against storage failure - leverage Delphix replication.
For protection against administrative error - leverage storage snapshots.
For protection against site failure - leverage Delphix replication and/or Delphix Live Archive.
Infrastructure Backup of the Delphix VM: Must take a consistent group snapshot of all Delphix VM storage (system disk, VM configuration, database VMDK/RDMs). RTO and RPO are inferior compared to Delphix Replication. The granularity of restore is at the VM level: all-or-nothing.
Virtual Machine Backup: Create a VM snapshot, back up (via a proxy server), then remove the snapshot. Products use VMware APIs: NBU for VMware, TSM, Networker. Limited to < 2TB because of the time to back up and the impact on the running VM.
Storage Array Backup: Take a consistent storage snapshot, replicate to tape/VTL media server, remove the snapshot. Products include Hitachi Shadow Copy, EMC SnapCopy, HP Business/Snap Copy.
Architecture best practices for Masking
As of release 5.0, the virtualization and masking functions are combined into a single OVA and require additional consideration for installation and configuration. Additionally, support for remote Continuous Compliance Engine calls has been implemented and is supported in 5.0.4 and above.
Continuous Compliance Engines should continue to be deployed to hosts dedicated to that function.
Possible exceptions to this would be when any virtualization needed is extremely low and unlikely to be heavily impacted by masking requirements.
The Masking Engine is disabled by default.
The Delphix Engine will continue to remain running (but unused) on a Masking-only VM.
The standard configuration for a dedicated Continuous Compliance Engine:
8 vCPUs
16GB RAM minimum, 32 GB RAM or more recommended.
300 GB of storage is required for the OS/system root disk (5.1.4 and greater).
50 GB of storage for the data disk must be added during the initial configuration via the Engine Setup wizard. (The engine will not complete its setup without a separate data disk.)
If a bulk operation is used, allocate extra space equivalent to the size of all datasets (tables) that will be masked concurrently.
As a rule of thumb: Disk Space Required for Bulk = ((Total Database Size * 0.66) * 0.10), where Raw Data = Total Database Size * 0.66.
10% change is an estimate based on our experience for the data we mask. Often it is lower, but there are exceptions, such as masking a data warehouse with a large fact table and many much smaller tables.
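Worked example, assuming a 2 TB (2048 GB) database where everything is masked concurrently: Raw Data = 2048 GB * 0.66 ≈ 1352 GB, so Disk Space Required for Bulk ≈ 1352 GB * 0.10 ≈ 135 GB of extra space.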
The VMDK for the engine OS is often stored on the same VMFS volume as the VM definition file (aka VMX). In that case, the VMFS volume must have sufficient space to hold the VMX configuration, the VMDK for the system disk, and any VMware logging.
Additional VMFS space for swap/paging is required if RAM reservations are not enabled. (The VM will not start if reservations are lacking and disk space is not available for swap)
CPU utilization
One vCPU per concurrent masking job is considered a best practice.
Dependent on the algorithms used: some are calculations (such as those using AES encryption), while others are lookups and tend to do more I/O.
Memory utilization
The Continuous Compliance Engine uses its memory to cache data. More memory will provide better performance. 1GB per masking job is considered a best practice.
Dependent on memory settings in the Masking Engine and JVMs. An increase in parallel workloads will require more memory. Data is either cached directly or via Kettle, so the larger the lookups for algorithms, the more memory is required. This is the first thing to look at for performance issues.
Network and I/O
Continuous Compliance leverages the Target DB server and VDB for most of the workload. This means the masking engine can be I/O bound waiting for the DB server. As long as the masking engine can read the data faster than it can process it this is not an issue. Slow networks with numerous hops between the DB server and the Masking server can cause performance problems. Co-locating the masking server with the DB server is recommended in these cases.
Masking VDB tuning
Always start with the tuning recommendations for Target servers and VDBs first. If the VDB is not performing well, the performance of masking will suffer.
For Oracle, it is critical to select NOARCHIVELOG mode and tune online redo log size at provision time.
For SQL Server, the VDB should be in SIMPLE recovery with appropriate log file and TempDB sizes.
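For illustration, both can be adjusted with T-SQL such as the following; the database name, logical log file name, and sizes are placeholders to adapt to the environment:
ALTER DATABASE [MyVDB] SET RECOVERY SIMPLE;
ALTER DATABASE [MyVDB] MODIFY FILE (NAME = MyVDB_log, SIZE = 4096MB);
ALTER DATABASE [tempdb] MODIFY FILE (NAME = tempdev, SIZE = 8192MB);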
Backup of a continuous compliance engine
Virtual machine backups are recommended for versions of software in which masking runs in its own VM – in other words, the masking VM is separate from the VM(s) where virtualization takes place.
If an engine is supporting both masking and virtualization, review data protection best practices.
Although XML exports of inventories and environments do exist, they are incomplete. Do not rely on them.
In-Place (not On-the-Fly) Masking is the primary use case.
The following recommendations apply to the Continuous Compliance Engine versions 5.0.2 and earlier:
1. Jobs vs. Streams:
If there are multiple tables to be masked concurrently, use multiple, separate jobs – one per table.
Avoid multiple streams due to internal limitations.
The default setting for streams is 20; set it to 1. This will force serialization (one table at a time) if the job contains multiple tables.
2. Use one update thread per job; this avoids block collisions/contention during the UPDATE phase.
The default setting is 4; set it to 1.
a. Identify ALL indexes, constraints, and triggers on columns being masked (and only on columns being masked).
b. Evaluate whether it is better to drop/mask/recreate for indexes, or disable/mask/re-enable for triggers and constraints – as compared to leaving them in place during masking. The best choice depends on the situation.
3. For Oracle VDBs, use ROWID for SQL UPDATE of masked row value(s).
Edit the ruleset, select Edit All Logical Keys, and enter ROWID as the logical key value; see Managing Rule Sets.
When a single, large, non-partitioned table must be masked by concurrent jobs (each masking a subset of the table) to shorten masking elapsed time, segregate jobs by database block/page to avoid contention and locking conflicts. In other words, each job masks a unique set of blocks, and each block is masked exclusively by one job.
If a single table is being masked by multiple, concurrent Jobs, and Indexes/constraints/triggers must be dropped/recreated or disabled/enabled, these must be performed OUTSIDE of masking Jobs.
Pre-masking and post-masking steps must be created manually.
Scheduling of pre-script and post-script jobs must be devised. Plan to schedule/execute them externally.
Architecture best practices for source DB and OS settings
Oracle
ARCHIVELOG must be enabled:
select log_mode from v$database.
FORCE LOGGING should be enabled to ensure VDBs are not missing data. When NOLOGGING redo is applied during provision, the resulting VDB will be missing changes. Tables with NOLOGGING changes will throw corruption errors when scanned.
Block Change Tracking should be enabled to minimize snapsync time.
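These source settings can be checked and enabled with SQL along the following lines; this is a sketch to run as a DBA, and ENABLE BLOCK CHANGE TRACKING without a USING FILE clause assumes DB_CREATE_FILE_DEST is set:
SQL> SELECT log_mode, force_logging FROM v$database;
SQL> ALTER DATABASE FORCE LOGGING;
SQL> SELECT status, filename FROM v$block_change_tracking;
SQL> ALTER DATABASE ENABLE BLOCK CHANGE TRACKING;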
Consult the documentation for Oracle Standby sources.
If the database is encrypted with Oracle TDE (Transparent Data Encryption) plan your Delphix storage requirements with the expectation of minimal compression. Customer Observation: space usage for a TDE dSource copy was 92% (2.44 TB) of the source database size (2.67 TB). A typical Oracle dSource copy for a non-TDE database consumes 40% of the source database size.
SQL server
FULL vs SIMPLE recovery mode trade-off.
The maximum size of an MSSQL database that can be linked is 256TB for Windows versions greater than 2003. The limit is 2TB for Windows 2003. (See Linking a SQL Server dSource)
Architecture best practices for target DB and OS settings
Target database application settings
Oracle:
Provision with 3 x 5GB online redo logs (minimum) to avoid pause when transaction logs wraparound.
Provision in NOARCHIVELOG mode to reduce transaction log IO; Masking, Test, and QA VDBs rarely need point-in-time rewind.
Always check initialization parameters inherited from the parent; remove any expensive or irrelevant parameters.
DB_CACHE_SIZE, SGA_TARGET: set based on the target system being compared to.
FILESYSTEMIO_OPTIONS to SETALL. Any other setting inherited from the source is probably wrong.
DB_BLOCK_CHECKSUM, DB_BLOCK_CHECKING, DB_LOST_WRITE_PROTECT, DB_ULTRA_SAFE: set to default values to minimize the impact.
PARALLEL_DEGREE_POLICY to AUTO, PARALLEL_MAX_SERVERS default, PARALLEL_EXECUTION_MESSAGE_SIZE to 32768 (maximum): improves PQ performance.
FAST_START_MTTR_TARGET: drives steady write activity. Set based on the target system being compared to.
Consider non-durable commits for Masking, Test, QA, UAT: set COMMIT_WAIT = NOWAIT, COMMIT_LOGGING = BATCH.
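For example, these parameters can be applied on the VDB with ALTER SYSTEM; this is a sketch, and static parameters such as FILESYSTEMIO_OPTIONS and PARALLEL_EXECUTION_MESSAGE_SIZE require SCOPE=SPFILE and a restart:
SQL> ALTER SYSTEM SET filesystemio_options = 'SETALL' SCOPE=SPFILE;
SQL> ALTER SYSTEM SET parallel_degree_policy = 'AUTO' SCOPE=BOTH;
SQL> ALTER SYSTEM SET parallel_execution_message_size = 32768 SCOPE=SPFILE;
SQL> ALTER SYSTEM SET commit_wait = 'NOWAIT' SCOPE=BOTH;
SQL> ALTER SYSTEM SET commit_logging = 'BATCH' SCOPE=BOTH;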
Use Oracle Direct NFS (dNFS) for 11.2.0.4+ (unstable on older releases):
Recommended documentation:
Configuration Examples and Troubleshooting blog from Helmut Hutzler
Sample oranfstab to leverage multiple network paths for Delphix VDB
Set DNFS_BATCH_SIZE = 128 (default is 4096). This is a good starting point and sufficient for most workloads.
Tune the TCP stack: set tcp_adv_win_scale = 2 to work around the hard-coded Oracle dNFS TCP buffer size.
Check the Alert Log and V$DNFS_SERVERS, V$DNFS_FILES, V$DNFS_STATS to verify dNFS is working properly (sample here).
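To illustrate, dNFS is enabled by relinking the Oracle home, and the TCP setting can be applied with sysctl; this sketch assumes a Linux target and should be validated against Oracle and OS documentation for the versions in use:
$ cd $ORACLE_HOME/rdbms/lib && make -f ins_rdbms.mk dnfs_on
# sysctl -w net.ipv4.tcp_adv_win_scale=2 (add the setting to /etc/sysctl.conf to persist across reboots)
SQL> SELECT svrname, dirname FROM v$dnfs_servers; (after restart, rows here confirm dNFS is in use)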
Create AWR snapshots around a reference customer workload; generate an AWR report.
AWR snap before/after workload:
SQL> exec DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT();
AWR report between the snaps:
SQL> @?/rdbms/admin/awrrpte
Generate ASH report to diagnose bottlenecks while a workload is running.
SQL> @?/rdbms/admin/ashrpt
Run the synthetic benchmark sc-workload.
Where db file scattered read (multiblock cached read) latency is high, consult this Support KB: How to Mitigate Multi-Block Read Performance on Oracle 10g.
Improve distributed query performance by modifying dblinks to use local IPs instead of SCAN IPs.
NFS recommended mount options for Oracle RAC/SI: Oracle Support Note 359515.1.
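As an illustration only, Linux mount options for Oracle datafile volumes typically resemble the following; use the exact options from Note 359515.1 for your platform, and add actimeo=0 where the note calls for it (e.g. RAC):
rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,vers=3,timeo=600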
Target Host OS Settings
Existing documentation on Target OS practices: Target Host OS and Database Configuration Options
HP-UX 11.31+
Async NFS Direct I/O: HP-UX requires the Oracle parameter disk_asynch_io to be turned off for filesystems.
IBM AIX:
Consult IBM documentation on AIX TCP Tuning
Windows:
Anti-virus programs can impact both performance and operation. Delphix recommends anti-virus scanning exclude folders where Delphix files are maintained, in addition to the normal exclusions put in place for MSSQL operation.
Delphix Connector (aka DX Connector):
Plan 3-5GB for the Delphix Connector installation.
Windows does not yet have ssh, so Delphix developed the "DX Connector" for Windows target host communication.
The connector must be installed on all Target Windows hosts.
The connector supports two modes – v1 and v2 – both of which use the same application binaries.
The connector v1 process is used to bootstrap the v2 process on a target. This opens a DSP session back to the Delphix Engine (The same thing is done via SSH on U*nix Targets)
v2 mode is required to enable SQL hooks
The connector can always be downloaded from a local Delphix Engine at: http://<delphix_engine>/connector/DelphixConnectorInstaller.msi.
The connector is backward compatible, so it is not always necessary to upgrade it during a Delphix upgrade.
iSCSI connections:
Read the following for general awareness of iSCSI limits
In addition to the hard limits on iSCSI connections, consideration must be given to the RAM, CPU, and Network to provide sufficient resources for the load on any Target or Staging host.
Increase the iSCSI timeout on both Target and Staging hosts.
In certain circumstances, it's possible that iSCSI startup will not complete before the SQL Service attempts to start a database. In such circumstances, it can be helpful to ensure the SQL service depends on the iSCSI service.
Example: c:\> sc config "MSSQLServer" depend="Microsoft iSCSI Initiator Service"
Note that any changes to iSCSI are system-wide and could potentially impact other applications also leveraging that feature.
Enable Receive Side Scaling (RSS) on each network interface that Delphix will be connecting to.
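For example, RSS can be checked and enabled globally from an elevated command prompt; per-adapter RSS is managed in the NIC's advanced properties:
C:\> netsh int tcp show global
C:\> netsh int tcp set global rss=enabled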
Architecture best practices for Staging Targets
This host is called a "staging target" because it has much in common with other targets, such as the remote storage mount to the Delphix Virtualization Engine. Because it leverages remote storage over the network, the staging target only needs enough disk capacity for the OS, database application, and any relevant logs or tools. A staging target is a requirement on all platforms that Delphix supports except Oracle, but you can also use it for Oracle to replay logs leveraging validated sync.
Memory and CPU
32 GB RAM minimum
4 vCPU minimum
General guidance for staging servers (Multi-platform)
Delphix recommends dedicated Staging servers for role/architecture separation. However, any Target server can be used as Staging.
In cases where the same server is used as both Staging and Target, we strongly recommend a dedicated instance/install for staging to avoid confusion.
Delphix recommends at least one Staging server per Delphix Engine to avoid the possibility of a single point of failure across multiple engines.
If a staging server is shared among multiple Delphix Engines, please ensure that a dedicated SQL Server Instance is created for each Delphix Engine.
Configuration / performance factors:
Transaction log generation rate.
Number of VDBs.
Precise guidance on these items has not yet been defined. In general, if there is a heavy log generation rate and few VDBs, the ideal recommendation is to have at least 1 Staging Target per Delphix Engine.
Disk / Local storage
The only local storage needed is for the OS and application with default databases.
Storage for a staging database is provided from the Delphix Engine, which is mounted over the network similar to any Target host (NFS/iSCSI).
If the customer has a standard DB server build, their standard storage sizing is probably fine.
If a recommendation is still needed, suggest 30GB for OS and application and any tools needed.
Network requirements
The Staging Target engages in network data transfers between Staging and the Source backup shared location as well as between Staging and the Delphix Engine.
The Staging Target is also a Target server, and as such should have < 1ms latency to the Delphix Engine (and low latency to the Source backup, when possible).
If the change rate on the Source database(s) is > 1 Gb/sec, the recommended network bandwidth to support network transfers is 10 Gb/sec.
In cases where only 1 Gb/sec network bandwidth is available, segregation of each network is recommended to reduce network contention.
Ensure that the virtual NIC is using the standard vmxnet3 adapter and not the Intel adapter for VMware-based clients. Logical IO errors have been reported when using the Intel adapter instead of vmxnet3.
Windows and MSSQL Specific
An MSSQL Server Instance used for Staging should not be clustered.
Staging should not be hosted on Windows 2003 - extended support ended July 14, 2015. It is also the first Windows version with iSCSI support and is not ideal.
The SQL Server Instances hosted on the Staging Target should have a Maximum Memory set. Also ensure that at all times, at least 10% of total memory is available for OS operations.
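For example, a memory cap can be set on the staging instance with sp_configure; the 16384 MB value is only a placeholder to be sized against the host's total RAM:
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 16384;
RECONFIGURE;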
Only system databases (Master/Model/MSDB/TempDB) are kept on local storage. All other data is read/written to the Delphix Engine.
Windows iSCSI configuration and limits for v2p, target, and staging hosts.
Ensure that Receive Side Scaling (RSS) is enabled on every network interface that Delphix will be connecting to.