
Performance analytics case study: using a single statistic

Overview

The Delphix Engine uses the Network File System (NFS) as the transport for Oracle installations. An increase in NFS latency can cause sluggishness in applications running on top of Virtual Databases. This case study illustrates how this pathology can be root-caused using the analytics infrastructure. The investigation uses a single statistic and its many axes: iteratively drill down by inspecting a new axis, then filter the data to include only the operations that appear slow. This technique is valuable for determining which usage patterns of a resource might be making the system sluggish. If you isolate a performance issue using this approach but aren't sure what is causing it or how to fix it, Delphix Support can assist with your investigation.

The following example inspects the statistic which provides information about NFS I/O operations on the Delphix Engine. This statistic can be collected at most once per second, and the axes it can collect include:

  • latency, a histogram of wait times between NFS requests and NFS responses

  • size, a histogram of the NFS I/O sizes requested

  • op, whether the NFS requests were reads or writes

  • client, the network address of the NFS client which was making requests

Roughly the same performance information can be obtained from the iSCSI interface as well.
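
A rough sketch of how these axes surface in collected data may help connect them to the CLI output shown later in this example. The Python literal below mirrors the JSON structure returned by getData; the field layout follows that output, but the specific values (and the client address) are invented for illustration.

CODE
# Rough sketch (values invented) of how the axes above appear in collected data:
# op and client are shared by every datapoint in a stream, while latency and
# size arrive as per-interval histograms mapping bucket bounds to counts.
example_stream = {
    "type": "NfsOpsDatapointStream",
    "op": "write",                    # read or write
    "client": "10.0.0.12",            # network address of the NFS client (made up)
    "datapoints": [{
        "type": "IoOpsDatapoint",
        "latency": {"32768": "16", "65536": "10"},  # wait-time buckets (assumed ns) -> counts
        "size": {"8192": "20", "131072": "6"},      # I/O size buckets (assumed bytes) -> counts
        "timestamp": "2013-05-14T15:51:40.000Z",
    }],
}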

Investigation

Because the NFS layer sits above the disk layer, all NFS operations that use the disk synchronously (synchronous writes and uncached reads) will have latencies that are slightly higher than those of their corresponding disk operations. Usually, because disks have very high seek times compared to the time the NFS server spends on CPU, disk operations are responsible for almost all of the latency of these NFS operations. In the graphical representation, you can see this in the way the slower cluster of NFS latencies (around 2ms-8ms) lines up with the median of the disk I/O (around 2ms-4ms). Another difference between the two plots is that the number of disk operations is much lower than the corresponding number of NFS operations. This is because the Delphix filesystem batches together write operations to improve performance.
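
To make that comparison concrete, the short sketch below (an illustration, not part of the product) estimates a median from a latency histogram of the form returned by the analytics infrastructure. It assumes each key is a bucket's lower bound in nanoseconds and each value is the number of operations in that bucket; the example histograms are invented to match the clusters described above.

CODE
# Illustrative sketch: approximate the median latency of a histogram whose keys
# are assumed to be bucket lower bounds in nanoseconds and whose values are
# operation counts. The example histograms below are invented.
def approx_median_ns(histogram):
    buckets = sorted((int(bound), int(count)) for bound, count in histogram.items())
    total = sum(count for _, count in buckets)
    seen = 0
    for bound, count in buckets:
        seen += count
        if seen * 2 >= total:
            return bound
    return None

nfs_latency  = {"2097152": "40", "4194304": "35", "8388608": "25"}  # ~2ms-8ms cluster
disk_latency = {"2097152": "30", "4194304": "18"}                   # ~2ms-4ms
print(approx_median_ns(nfs_latency), approx_median_ns(disk_latency))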

If database performance is not satisfactory and almost all of the NFS operation time is spent waiting for the disks, it suggests that the disk is the slowest piece of the I/O stack. In this case, disk resources (the number of IOPS to the disks, the free space on the disks, and the disk throughput) should be investigated more thoroughly to determine if adding more capacity or a faster disk would improve performance. However, care must be taken when arriving at these conclusions, as a shortage of memory or a recently-rebooted machine can also cause the disk to be used more heavily due to fewer cache hits.

Sometimes, disk operations will not make up all of the latency, which suggests that something between the NFS server and the disk (namely, something in the Delphix Engine) is taking a long time to complete its work. If this is the case, it is valuable to check whether the Delphix Engine is resource-constrained, and the most common areas of constraint internal to the Delphix Engine are CPU and memory. If either of those is too limited, you should investigate whether expanding the resource would improve performance. If no resources appear to be constrained or more investigation is necessary to convince you that adding resources would help the issue, Delphix Support is available to help debug these issues.

While using this technique, you should take care to recognize the limitations that caching places on how performance data can be interpreted. In this example, the Delphix Engine uses a caching layer for the data it stores, so asynchronous NFS writes will not go to disk quickly because they are being queued into larger batches, and cached NFS reads won't use the disk at all. This causes these types of NFS operations to return much more quickly than any disk operations are able to, resulting in a very large number of low-latency NFS operations in the graph above. For this reason, caching typically creates a bimodal distribution in the NFS latency histograms, where the first cluster of latencies is associated with operations that only hit the cache, and the second cluster of latencies is associated with fully or partially uncached operations. In this case, cached NFS operations should not be compared to the disk latencies because they are unrelated. It is possible to use techniques described in the first example to filter out some of the unrelated operations to allow a more accurate mapping between disk and NFS latencies.
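
One way to work around the caching effect is to partition the latency histogram into its two clusters before making any comparison with disk latencies. The sketch below is a minimal illustration of that idea, assuming a 1ms cutoff between cache hits and disk-backed operations; both the cutoff and the example histogram are assumptions, not values taken from the product.

CODE
# Minimal sketch: split a latency histogram into a cache-hit cluster and an
# uncached cluster around an assumed 1ms cutoff, so only the uncached portion
# is compared against disk latencies. Keys are assumed ns lower bounds.
CUTOFF_NS = 1_000_000  # assumed boundary between the two clusters

def split_cached(histogram, cutoff_ns=CUTOFF_NS):
    cached = {b: c for b, c in histogram.items() if int(b) < cutoff_ns}
    uncached = {b: c for b, c in histogram.items() if int(b) >= cutoff_ns}
    return cached, uncached

latency = {"16384": "500", "32768": "350", "2097152": "40", "4194304": "25"}
cached, uncached = split_cached(latency)
print(sum(map(int, cached.values())), "cached ops,",
      sum(map(int, uncached.values())), "uncached ops")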

  1. Begin the performance investigation by examining some high-level statistics such as latency.

    1. Create a slice with statistic type NFS_OPS.

    2. Set the slice to collect the latency axis.

    3. Do not add any constraints.

    4. Set the collection interval. Anything over one second will work, but ten seconds gives good data resolution and will not use a lot of storage to persist the data that is collected. The rest of this example will assume a collection period of ten seconds for all other slices, but any value could be used.

      CODE
      /analytics
      
      create
      set name=step1
      set statisticType=NFS_OPS
      set collectionInterval=10
      set collectionAxes=latency
      commit

      This will collect a time-series of histograms describing NFS latencies as measured from inside the Delphix Engine, where each histogram shows how many NFS I/O operations fell into each latency bucket during every ten-second interval. After a short period of time, read the data from the statistic slice:

      CODE
      select step1
      getData
      setopt format=json
      commit
      setopt format=text

      The setopt steps are optional but make the output easier to read in the CLI. The output looks like this:

      CODE
      {
          "type": "DatapointSet",
          "collectionEvents": [],
          "datapointStreams": [{
              "type": "NfsOpsDatapointStream",
              "datapoints": [{
                  "type": "IoOpsDatapoint",
                  "latency": {
                      "32768": "16",
                      "65536": "10"
                  },
                  "timestamp": "2013-05-14T15:51:40.000Z"
              }, ...]
          }],
          "resolution": 10
      }

      The data is returned as a set of datapoint streams. Streams hold the fields which are shared by all the datapoints they contain. Later in this example, the op and client fields will be added to the streams, and multiple streams will be returned. Streams are described in more detail in Performance Analytics Tool Overview. The resolution field indicates the number of seconds that corresponds to each datapoint, which in our case matches the requested collectionInterval. The collectionEvents field is not used in this example, but lists when the slice was paused and resumed, to distinguish between moments when no data was collected because the slice was paused, and moments when there was no data to collect.
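
      If you save the JSON output to a file (for example by copying it from the CLI session into step1.json, a file name used here only for illustration), a few lines of Python can turn the DatapointSet into per-stream time series for further inspection. This is a hedged sketch, not part of the Delphix tooling:

      CODE
      # Sketch: summarize a DatapointSet saved from getData as step1.json (made-up
      # file name). Prints the total operation count per timestamp for each stream.
      import json

      with open("step1.json") as f:
          dataset = json.load(f)

      for stream in dataset["datapointStreams"]:
          # Stream-level fields such as op or client appear when those axes are collected.
          label = stream.get("op") or stream.get("client") or stream["type"]
          for point in stream["datapoints"]:
              ops = sum(int(count) for count in point["latency"].values())
              print(label, point["timestamp"], ops, "ops")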

  2. If the latency distributions show some slow NFS operations, the next step would be to determine whether the slow operations are reads or writes.

    1. Specify a new NFS_OPS slice to collect this by collecting the op and latency axes.

    2. To limit output to the long-running operations, create a constraint on the latency axis that prohibits the collection of data on operations with latency less than 100ms.

      CODE
      /analytics
      
      create
      set name=step2
      set statisticType=NFS_OPS
      set collectionInterval=10
      set collectionAxes=op,latency
       
      edit axisConstraints.0
      set axisName=latency
      set type=IntegerGreaterThanConstraint
      set greaterThan=100000000
      back
       
      commit

      The greaterThan field is 100ms converted into nanoseconds.

      Reading the data proceeds in the same way as the first step, but there will be two streams of datapoints, one where op=write, and one where op=read.

      info : Because we constrained output to operations with latencies higher than 100ms, none of the latency histograms will have any buckets for latencies lower than 100ms.
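
      Once both streams have been read back, a quick way to compare them is to total the slow operations in each. The sketch below makes the same assumptions as before (getData output saved to a file, here the made-up name step2.json):

      CODE
      # Sketch: count slow operations per op type from a step-2 DatapointSet saved
      # as step2.json (made-up name). Every bucket is already >= 100ms because of
      # the latency constraint on the slice.
      import json

      with open("step2.json") as f:
          dataset = json.load(f)

      for stream in dataset["datapointStreams"]:
          slow_ops = sum(int(count)
                         for point in stream["datapoints"]
                         for count in point["latency"].values())
          print(stream["op"], slow_ops, "slow ops")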

  3. After inspecting the two data streams, you might find that almost all slow operations are writes, so it could be valuable to determine which clients are requesting the slow writes, and how large each of the writes is.

    1. To collect this data, create a new NFS_OPS slice which collects the size and client axes.

    2. Add constraints so that the op axis only collects data for write operations and the latency axis filters out operations taking less than 100ms.

      info : Because the constraint on the op axis dictates that it will always have the value write, it is not necessary to collect the op axis anymore.

      CODE
      /analytics
      
      create
      set name=step3
      set statisticType=NFS_OPS
      set collectionInterval=10
      set collectionAxes=size,client
       
      edit axisConstraints.0
      set type=IntegerGreaterThanConstraint
      set axisName=latency
      set greaterThan=100000000
      back
       
      edit axisConstraints.1
      set type=StringEqualConstraint
      set axisName=op
      set equals=write
      back
       
      commit

      Reading the data proceeds in the same way as the first two steps, but there will be one stream for every NFS client, and each stream will be a time-series of histograms showing the write sizes that occurred during each ten-second interval.

      Continuing to use this approach will allow you to narrow down the slow writes to a particular NFS client, and you may be able to tune that client in some way to speed it up.
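
      As an illustration of that narrowing-down step, the sketch below totals the approximate slow-write volume per client so the busiest client stands out. It assumes the getData output was saved to step3.json and that the size histogram keys are bucket lower bounds in bytes; both are assumptions made for this example.

      CODE
      # Sketch: rank NFS clients by approximate slow-write volume from a step-3
      # DatapointSet saved as step3.json (made-up name). Size histogram keys are
      # assumed to be bucket lower bounds in bytes.
      import json
      from collections import defaultdict

      with open("step3.json") as f:
          dataset = json.load(f)

      bytes_per_client = defaultdict(int)
      for stream in dataset["datapointStreams"]:
          for point in stream["datapoints"]:
              for bucket, count in point["size"].items():
                  bytes_per_client[stream["client"]] += int(bucket) * int(count)

      for client, total in sorted(bytes_per_client.items(), key=lambda kv: -kv[1]):
          print(client, "~", total, "bytes of slow writes")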
