Best practices for storage and FAQs

This page outlines best practices for ESXi and cloud storage configurations, along with guidance on testing, additional configuration details, and an FAQ section.

Best practices for ESXi storage

  • Virtual Disks (VMDK) with spinning or tiered media must be thick provisioned and lazy zeroed.

    • For storage which is 100% SSD/EFD/Flash-based, continue to thick provision, however, eager zero is not necessary.

  • VMDKs may be homed on VMFS or RDM storage, however, VMFS is much more common and generally preferred.

  • Regardless of VMFS/RDM selection, physical LUNs must have uniform characteristics (RAID, spindle count, tier) and should be thin provisioned to save time.

  • Storage allocated must be identical between each vSCSI controller.

  • Delphix recommends starting with multiple smaller disks over fewer larger disks to facilitate easily growing and/or shrinking the storage pool if storage needs change over time. 

  • The supported maximum of four virtual SCSI controllers (PVSCSI (default) or LSI Logic Parallel) must be enabled. A mix of different types of SCSI controllers is not supported within the engine.

  • Virtual Disks must be evenly distributed across the four virtual SCSI controllers. For example, 8 virtual disks should be configured as 2 disks per controller: SCSI (0:0), SCSI (0:1), SCSI (1:0), SCSI (1:1), SCSI (2:0), SCSI (2:1), SCSI (3:0), SCSI (3:1) (see the sketch after this list).

    • You don't need to account for the OS disk in the even distribution of data disks across controllers; place it on any controller, since the OS does not put a substantial load on it.

  • Delphix requires 127GB for the system disk.

  • The VMDK for the Delphix Engine OS is often stored on the same VMFS volume as the Delphix VM definition file (aka VMX). In that case, the VMFS volume must have sufficient space to hold the Delphix VMX configuration, the VMDK for the system disk, a swap/paging area if a memory reservation was not enabled for the Delphix Engine (or to suspend the system), and any related VMware logging.
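
The even-distribution rule above can be expressed as a simple round-robin assignment. The sketch below is illustrative only (the eight-disk layout is an assumption, and it is not generated by any Delphix or VMware tool); it shows how a disk count that divides evenly across four controllers maps to two disks per controller:

    NUM_CONTROLLERS = 4  # supported maximum of virtual SCSI controllers

    def distribute_disks(num_disks: int) -> list[str]:
        """Return SCSI (controller:unit) addresses for an even round-robin layout."""
        if num_disks % NUM_CONTROLLERS != 0:
            raise ValueError("choose a disk count that divides evenly across controllers")
        return [f"SCSI ({disk % NUM_CONTROLLERS}:{disk // NUM_CONTROLLERS})"
                for disk in range(num_disks)]

    print(distribute_disks(8))
    # ['SCSI (0:0)', 'SCSI (1:0)', 'SCSI (2:0)', 'SCSI (3:0)',
    #  'SCSI (0:1)', 'SCSI (1:1)', 'SCSI (2:1)', 'SCSI (3:1)']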

Best practices for cloud storage

Block Storage Engine configuration

  • Instance size limits network throughput, storage IOPS/throughput, and total number of attached disks. 

    • Instance and disk limits on bandwidth and IOPS, as well as cost, are all related variables that must be evaluated together for a given workload (see the sketch after this list). Example: r5n.8xlarge limits IOPS to 30K and EBS throughput to 850 MB/s.

    • Recommended instance sizes with high network and disk throughput: r5n.8xlarge for AWS, E32 for Azure, and N2-*-32 for GCP.

  • Cloud disks should be the same size and family/tier.

  • Delphix recommends starting with multiple smaller disks over fewer larger disks to facilitate easily growing and/or shrinking the storage pool if storage needs change over time. 

  • Ensure all disks have adequate aggregate IOPS and throughput for the workload.
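
Because instance and disk limits cap effective performance together, a quick back-of-the-envelope check can be useful. The sketch below is a minimal illustration; the r5n.8xlarge limits are the example figures quoted above, and the per-disk numbers are assumptions:

    INSTANCE_IOPS_LIMIT = 30_000       # example: r5n.8xlarge EBS IOPS limit
    INSTANCE_THROUGHPUT_MBPS = 850     # example: r5n.8xlarge EBS throughput limit

    disks = [{"iops": 6_000, "throughput_mbps": 250}] * 6   # six identical disks (assumed)

    aggregate_iops = sum(d["iops"] for d in disks)
    aggregate_tput = sum(d["throughput_mbps"] for d in disks)

    # Effective performance is capped by whichever is lower: the instance
    # limit or what the disks can provide in aggregate.
    effective_iops = min(aggregate_iops, INSTANCE_IOPS_LIMIT)
    effective_tput = min(aggregate_tput, INSTANCE_THROUGHPUT_MBPS)

    print(f"disks provide {aggregate_iops} IOPS / {aggregate_tput} MB/s, "
          f"instance caps this at {effective_iops} IOPS / {effective_tput} MB/s")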

Elastic Data Engine configuration 

  • Instance size limits network throughput, storage IOPS/throughput, and total number of attached disks. 

    • Instance and disk limits on bandwidth and IOPS, as well as cost, are all related variables that must be evaluated together for a given workload. Example: r5n.8xlarge limits IOPS to 30K and EBS throughput to 850 MB/s.

    • Recommended instance sizes with high network and disk throughput: r5n.8xlarge for AWS, E32 for Azure, and N2-*-32 for GCP.

  • Cloud disks should be the same size and family/tier.

  • Delphix recommends starting with multiple smaller disks over fewer larger disks to facilitate easily growing and/or shrinking the storage pool if storage needs change over time. 

  • Size the initial storage pool at roughly 50% of the combined size of all dSources, then add or remove disks to achieve optimal performance (see the sketch after this list).

  • Ensure all disks have adequate aggregate IOPS and throughput for the workload.
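
A minimal sketch of the initial sizing guidance above; the dSource sizes and disk count are assumptions for illustration, and actual sizing should follow your workload:

    import math

    dsource_sizes_gb = [800, 1_200, 500]     # assumed dSource sizes
    num_disks = 4                            # keep disks the same size and tier

    # Start at ~50% of the combined dSource size, split across equal disks.
    initial_pool_gb = 0.5 * sum(dsource_sizes_gb)
    disk_size_gb = math.ceil(initial_pool_gb / num_disks)

    print(f"start with {num_disks} x {disk_size_gb} GB "
          f"(~{initial_pool_gb:.0f} GB total), then add/remove disks as needed")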

Testing

  • Run the Storage Performance Tool on the raw storage before any engine configuration. This is a one-time opportunity for each engine upon installation.

  • Required maximum storage latency is < 2ms for writes and < 10ms (95th percentile) for small random reads. Minimum passing grades: 4KB/8KB reads (B-), 1MB reads (A), 1KB/128KB writes (A). A minimal check of these latency targets is sketched after this list.

  • If working with the Delphix Professional Services team, we would expect to run additional baseline performance measurements via composite tools and scripts.

    • For example, "Sanity Check" (Oracle) or "DiskSpd" (MSSQL).
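
A minimal sketch of checking latency samples against the targets above; the samples are invented for illustration, and the real measurement comes from the Storage Performance Tool, not this script:

    import math

    def p95(samples_ms: list[float]) -> float:
        """Nearest-rank 95th percentile of a list of latency samples."""
        ordered = sorted(samples_ms)
        return ordered[max(0, math.ceil(0.95 * len(ordered)) - 1)]

    write_latencies_ms = [0.6, 0.8, 1.1, 0.9, 1.4, 0.7]        # assumed samples
    read_latencies_ms = [2.0, 3.5, 4.1, 6.0, 7.2, 8.9, 9.5]    # assumed samples

    writes_ok = max(write_latencies_ms) < 2.0     # treat the 2 ms write target as a ceiling
    reads_ok = p95(read_latencies_ms) < 10.0      # 95th percentile for small random reads

    print(f"writes pass: {writes_ok}, small random reads pass: {reads_ok}")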

Detailed discussion

Before beginning any discussion of storage performance, or at the start of installation, collect the following specifications from your storage administrator; they will help in understanding the storage environment (a simple record capturing these fields is sketched after the list).

  • Vendor, Model (For example: EMC VMAX 40k, HP 3PAR StoreServ 7000)

  • IO latency SLO (For example 5ms 99.999%)

  • IOPS/GB SLO (For example: 0.68 IOPS/GB for EMC Gold-1)

  • Cache type and size (For example FAST cache 768GB)

  • Tier, number of pools; if auto-tiering, the relocation schedule (For example: Gold/Silver/Auto, 3 pools, etc.)

  • Pool detail: (#) drives, RPM, Type (For example: Pool1: (20) EFD, (30) 15k SAS, Pool 2: (40) 10k SATA)

  • Connection (For example: 16Gb Fibre Channel, 10Gb iSCSI, 20Gb NFS)

  • Dedicated or Shared pool (how many applications/servers)
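
A simple way to capture these details consistently is a small structured record. The sketch below is illustrative only; the field names and example values are assumptions, not a Delphix schema:

    from dataclasses import dataclass

    @dataclass
    class StorageSpec:
        vendor_model: str       # e.g. "EMC VMAX 40k"
        io_latency_slo: str     # e.g. "5ms 99.999%"
        iops_per_gb_slo: float  # e.g. 0.68 (EMC Gold-1)
        cache: str              # e.g. "FAST cache 768GB"
        tiering: str            # e.g. "Gold/Silver/Auto, 3 pools, nightly relocation"
        pool_detail: str        # e.g. "Pool1: (20) EFD, (30) 15k SAS; Pool2: (40) 10k SATA"
        connection: str         # e.g. "16Gb Fibre Channel"
        dedicated: bool         # False if the pool is shared with other applications

    spec = StorageSpec("EMC VMAX 40k", "5ms 99.999%", 0.68, "FAST cache 768GB",
                       "Gold/Silver/Auto, 3 pools", "Pool1: (20) EFD, (30) 15k SAS",
                       "16Gb Fibre Channel", dedicated=True)
    print(spec)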

FAQs

  • Why does Delphix require 127GB of storage for the OS? 

    • The system partition requires space to store and operate the OS, as well as application logs, upgrade and rollback images, and enough free space to store a kernel or application core dump should it be required. 

  • Why does Delphix require our LUNs to be uniform and contain an equal quantity and capacity of VMDKs, yet thin provisioning is OK? 

    • The engine leverages parallel reads, so the capacity of each LUN and the quantity of virtual disks it holds must be consistent. This allows reads and writes to be evenly distributed and minimizes the impact of potential utilization imbalances, which would otherwise create a “long tail” of higher latency on a single controller and impact the entire engine.

    • Data storage LUNs are generally formatted with a VMFS file system and hold a virtual disk (VMDK) that is thick provisioned and eager zeroed, so thick provisioning the LUN as well would be a waste of time.

  • Why does Delphix require < 10ms latency (95th percentile) storage? 

    • Storage latency is especially important in database environments. Average latency doesn’t give a complete picture of responsiveness, especially because Delphix leverages parallel reads; inconsistent performance (for example, good average latency but a “long tail”) can impact multiple operations. This is why Delphix focuses on 95th percentile latency, and why storage performance is validated as the first step when a new engine is deployed (see the illustrative sketch at the end of this FAQ list).

  • Why does Delphix prefer to extend existing storage rather than simply add more while maintaining equal distribution? 

    • While it is possible to add more storage and maintain the practice that "storage should be equal across controllers", extending LUNs (and then virtual disks) ensures that:

      • Delphix does not continue to fill disks that may already be nearly full, or continue to fragment already highly fragmented disks.

      • Existing disks do not suffer a write performance penalty from running low on free space.

      • Storage performance is consistent.
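
To illustrate the “long tail” point above, the sketch below compares two invented sets of latency samples with nearly identical averages but very different 95th percentiles; all numbers are assumptions for illustration only:

    import math
    from statistics import mean

    def p95(samples_ms: list[float]) -> float:
        """Nearest-rank 95th percentile of a list of latency samples."""
        ordered = sorted(samples_ms)
        return ordered[max(0, math.ceil(0.95 * len(ordered)) - 1)]

    steady = [6.0] * 20                        # consistent latency
    long_tail = [2.0] * 18 + [40.0, 45.0]      # fast on average, slow tail

    for name, samples in (("steady", steady), ("long tail", long_tail)):
        print(f"{name}: mean={mean(samples):.2f} ms, p95={p95(samples):.1f} ms")
    # steady: mean=6.00 ms, p95=6.0 ms
    # long tail: mean=6.05 ms, p95=40.0 ms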
