Best practices for storage
Configuration:
Virtual Disks (VMDK) on spinning or tiered media must be thick provisioned and eager zeroed.
For storage that is 100% SSD/EFD/flash-based, continue to thick provision; eager zeroing is not necessary.
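As a quick way to confirm the provisioning mode, a short pyVmomi (vSphere Python SDK) sketch such as the one below can list each virtual disk's backing type. It assumes flat VMDK-backed disks; the vCenter address, credentials, and the VM name "delphix-engine" are placeholders, not values from this document.

```python
# Sketch: report each virtual disk's provisioning mode via pyVmomi.
# Connection details and the VM name are illustrative placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()          # lab only; verify certificates in production
si = SmartConnect(host="vcenter.example.com", user="admin",
                  pwd="secret", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "delphix-engine")

for dev in vm.config.hardware.device:
    if isinstance(dev, vim.vm.device.VirtualDisk):
        backing = dev.backing   # FlatVer2BackingInfo for flat VMDK-backed disks
        mode = ("thin" if backing.thinProvisioned
                else "thick eager zeroed" if backing.eagerlyScrub
                else "thick lazy zeroed")
        print(f"{dev.deviceInfo.label}: {mode}")

Disconnect(si)
```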
VMDKs may be homed on VMFS or RDM storage; however, VMFS is much more common and generally preferred.
Regardless of VMFS/RDM selection, physical LUNs must have uniform characteristics (RAID, spindle count, tier) and should be thin provisioned to save time.
The storage allocated to each vSCSI controller must be identical.
All four virtual SCSI controllers (the supported maximum) must be enabled, using either PVSCSI (the default) or LSI Logic Parallel; mixing different SCSI controller types within the engine is not supported.
Virtual Disks must be evenly distributed across the 4 virtual SCSI controllers. For example, 8 virtual disks should be configured as 2 disks per controller: SCSI (0:0), SCSI (0:1), SCSI (1:0), SCSI (1:1), SCSI (2:0), SCSI (2:1), SCSI (3:0), SCSI (3:1).
The OS disk does not need to be factored into this even distribution; place it on any controller, since the OS does not put a substantial load on its controller.
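A minimal sketch of this layout check, assuming the default PVSCSI controllers, a pyVmomi VirtualMachine object `vm` already in hand (as in the earlier sketch), and that the system disk sits at SCSI (0:0), which is a common but not universal convention:

```python
# Sketch: count data disks per PVSCSI controller and flag uneven layouts.
from collections import Counter
from pyVmomi import vim

controllers = {dev.key: dev for dev in vm.config.hardware.device
               if isinstance(dev, vim.vm.device.ParaVirtualSCSIController)}
disks = [dev for dev in vm.config.hardware.device
         if isinstance(dev, vim.vm.device.VirtualDisk)
         and dev.controllerKey in controllers]

# Treat SCSI (0:0) as the system disk -- an assumption, adjust as needed.
data_disks = [d for d in disks
              if not (controllers[d.controllerKey].busNumber == 0 and d.unitNumber == 0)]

per_bus = Counter(controllers[d.controllerKey].busNumber for d in data_disks)
print("controllers present:", sorted(c.busNumber for c in controllers.values()))
print("data disks per controller:", dict(per_bus))

if len(controllers) != 4 or len(set(per_bus.values())) > 1:
    print("WARNING: expected 4 PVSCSI controllers with an even disk distribution")
```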
To provision VMDKs larger than 16TB, the vSphere web client must be used; the Win32 client will return an error.
Delphix requires 127GB for the system disk; older versions defaulted to 150GB.
Because the OVA is unified for masking, the 127GB requirement also applies to Masking Engines.
The VMDK for the Delphix Engine OS is often stored on the same VMFS volume as the Delphix VM definition file (VMX). In that case, the VMFS volume must have sufficient space to hold the Delphix VMX configuration, the VMDK for the system disk, a swap/paging area (if a memory reservation was not enabled for the Delphix Engine, or to allow suspending the system), and any related VMware logging.
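As a rough illustration of that sizing, the sketch below adds up the 127GB system disk, the swap file created when no memory reservation is set, and an overhead allowance for the VMX and logs; the overhead figure is an illustrative placeholder, not a Delphix-published number.

```python
# Sketch: estimate the VMFS free space needed for the Delphix system VMDK,
# swap, and VMX/log overhead. The 127 GiB system disk comes from this
# document; the 5 GiB overhead is an illustrative assumption.
def required_vmfs_gib(vm_memory_gib: float,
                      memory_reservation: bool,
                      system_disk_gib: float = 127.0,
                      overhead_gib: float = 5.0) -> float:
    """Return an estimated VMFS space requirement in GiB."""
    # Without a full memory reservation, ESXi creates a swap file roughly
    # the size of the unreserved memory; with a full reservation it is ~0.
    swap_gib = 0.0 if memory_reservation else vm_memory_gib
    return system_disk_gib + swap_gib + overhead_gib

print(required_vmfs_gib(vm_memory_gib=256, memory_reservation=False))  # 388.0
print(required_vmfs_gib(vm_memory_gib=256, memory_reservation=True))   # 132.0
```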
Set ESX Storage Multipathing IO optimization (KB 2072070).
Set multipathing policy to round-robin.
Set path switching IO operation limit to 1 (default 1000).
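The policy portion of these settings can be spot-checked from pyVmomi by reading each LUN's path selection policy, as in the sketch below; it assumes a pyVmomi HostSystem object `host` and only reports, it does not change anything. The IO operation limit itself is set with esxcli as described in KB 2072070.

```python
# Sketch: report the path selection policy for each SAN LUN on an ESXi host
# so deviations from round-robin (VMW_PSP_RR) stand out.
from pyVmomi import vim

storage = host.configManager.storageSystem
for lun in storage.storageDeviceInfo.multipathInfo.lun:
    policy = lun.policy.policy if lun.policy else "unknown"
    status = "OK" if policy == "VMW_PSP_RR" else "check (expected VMW_PSP_RR)"
    print(f"{lun.id}: {policy} -> {status}")
```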
Verify that the queue depth setting on the virtual disks used by VMware is appropriate, based on the lesser of the HBA's own queue depth and the underlying storage's combined queue depth shared across all hosts attached to the same controller.
See the relevant VMware KB article for how to check and set the queue depth.
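The arithmetic behind that check is simple; the sketch below uses illustrative placeholder numbers rather than actual vendor limits.

```python
# Sketch: the effective per-host queue depth should not exceed either the
# HBA's own limit or the array controller's queue depth divided among the
# hosts sharing it. All numbers below are illustrative placeholders.
def effective_queue_depth(hba_queue_depth: int,
                          array_controller_queue_depth: int,
                          hosts_on_controller: int) -> int:
    per_host_share = array_controller_queue_depth // hosts_on_controller
    return min(hba_queue_depth, per_host_share)

print(effective_queue_depth(hba_queue_depth=64,
                            array_controller_queue_depth=2048,
                            hosts_on_controller=8))   # -> 64
print(effective_queue_depth(hba_queue_depth=128,
                            array_controller_queue_depth=2048,
                            hosts_on_controller=32))  # -> 64
```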
Testing:
Run the Storage Performance Tool on the raw storage before any engine configuration; this is a one-time opportunity for each engine, at installation.
Required maximum storage latency is < 2ms for writes and < 10ms (95th percentile) for small random reads. Minimum passing grades: 4KB/8KB reads (B-), 1MB reads (A), 1KB/128KB writes (A).
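A small sketch of how such a threshold check might look, using placeholder latency samples in place of real Storage Performance Tool output:

```python
# Sketch: evaluate latency samples (in milliseconds) against the thresholds
# above -- writes under 2 ms and small random reads under 10 ms at the 95th
# percentile. The sample lists are placeholders, not measured results.
import statistics

def p95(samples_ms):
    return statistics.quantiles(samples_ms, n=20)[18]   # 95th percentile cut point

read_samples_ms = [0.8, 1.2, 2.5, 3.1, 4.0, 5.2, 6.8, 7.5, 8.1, 9.0]
write_samples_ms = [0.4, 0.6, 0.7, 0.9, 1.1, 1.2, 1.4, 1.5, 1.6, 1.8]

print("reads  OK" if p95(read_samples_ms) < 10.0 else "reads  FAIL")
print("writes OK" if p95(write_samples_ms) < 2.0 else "writes FAIL")
```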
If working with the Delphix Professional Services team, we would expect to run additional baseline performance measurements via composite tools and scripts.
e.g. "Sanity Check" (Oracle) or "DiskSpd" (MSSQL)
Detailed Discussion:
Before beginning any discussion on storage performance, or at the start of an installation, collecting the following specifications from your storage administrator will help in understanding the environment and setting performance expectations:
Vendor, Model (For example: EMC VMAX 40k, HP 3PAR StoreServ 7000)
IO latency SLO (For example 5ms 99.999%)
IOPS/GB SLO (For example: 0.68 IOPS/GB for EMC Gold-1)
Cache type and size (For example FAST cache 768GB)
Tiering and number of pools; if auto-tiering is used, the relocation schedule (For example: Gold/Silver/Auto, 3 pools, etc.)
Pool detail: (#) drives, RPM, Type (For example: Pool1: (20) EFD, (30) 15k SAS, Pool 2: (40) 10k SATA)
Connection (For example: 16Gb Fibre Channel, 10Gb iSCSI, 20Gb NFS)
Dedicated or Shared pool (how many applications/servers)
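To keep these answers in one consistent place, a simple record structure such as the following can be used; the field names and example values just mirror the checklist above and can be adjusted to the actual array.

```python
# Sketch: a structure for recording the storage details listed above so they
# can be shared consistently. Field names and example values are illustrative.
from dataclasses import dataclass, field

@dataclass
class StorageSpec:
    vendor_model: str                 # e.g. "EMC VMAX 40k"
    io_latency_slo: str               # e.g. "5ms 99.999%"
    iops_per_gb_slo: str              # e.g. "0.68 IOPS/GB (EMC Gold-1)"
    cache: str                        # e.g. "FAST cache 768GB"
    tiering: str                      # e.g. "Gold/Silver/Auto, 3 pools, nightly relocation"
    pools: list[str] = field(default_factory=list)
    connection: str = ""              # e.g. "16Gb Fibre Channel"
    dedicated: bool = True            # if shared, note how many applications/servers

spec = StorageSpec(
    vendor_model="EMC VMAX 40k",
    io_latency_slo="5ms 99.999%",
    iops_per_gb_slo="0.68 IOPS/GB (EMC Gold-1)",
    cache="FAST cache 768GB",
    tiering="Auto, 3 pools",
    pools=["Pool1: (20) EFD, (30) 15k SAS", "Pool2: (40) 10k SATA"],
    connection="16Gb Fibre Channel",
)
print(spec)
```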